
Add configurable MaxRequestsPerConn cluster param #113

Merged (1 commit) on Feb 16, 2023

Conversation

avelanarius
Member

Before this change, a connection had a hardcoded maximum number of streams. Since this value was so large (32768 for CQL v3+), it essentially meant unbounded concurrency.

This commit allows a user to configure this parameter via a newly added MaxRequestsPerConn option.

The "MaxRequestsPerConn" name was deliberately chosen to be very general, to allow us to safely introduce more advanced behavior in the future: concurrency limiting would no longer happen at the IDGenerator level, and an additional client-side queue would be introduced for requests waiting to be sent to Scylla (in addition to the in-flight requests queue).

This commit deliberately does not change the defaults, so as not to pessimize existing users.

Fixes #112
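
For illustration, a minimal sketch of how the new option might be set when building a cluster. Only the MaxRequestsPerConn field name comes from this PR; the import path, contact point, and the value 1024 are placeholders:

package main

import "github.com/gocql/gocql"

func main() {
    // Placeholder contact point; any reachable Scylla node works here.
    cluster := gocql.NewCluster("127.0.0.1")

    // Cap the number of concurrent requests per connection. Before this
    // change the only limit was the hardcoded stream count (32768 for
    // CQL v3+); 1024 is an arbitrary example value.
    cluster.MaxRequestsPerConn = 1024

    session, err := cluster.CreateSession()
    if err != nil {
        panic(err)
    }
    defer session.Close()
}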

@avelanarius
Member Author

Limitations:

  • Not well tested yet
  • Defaults not changed
  • maxStreams (MaxRequestsPerConn) has to be a multiple of 64. The issue is not only the division maxStreams / 64, but the whole structure of the code, which assumes a multiple of 64 (its own custom bitset implementation). Modifying the implementation would take some time and add a small risk of bugs.

@nuivall
Copy link
Member

nuivall commented Feb 15, 2023

What about the multiple-of-64 limitation you mentioned: will other values break the code? If so, perhaps rounding or a panic check is needed somewhere.

nuivall
nuivall previously approved these changes Feb 15, 2023
@avelanarius
Member Author

What about the multiple-of-64 limitation you mentioned: will other values break the code? If so, perhaps rounding or a panic check is needed somewhere.

First of all, this will cause trouble:

buckets := maxStreams / 64

because too few buckets would get allocated.

An immediate solution would be to round it up: (maxStreams + 63) / 64, but this code would not be aware that the last bucket is smaller than the others:

func (s *IDGenerator) GetStream() (int, bool) {
    // based closely on the java-driver stream ID generator
    // avoid false sharing subsequent requests.
    offset := atomic.LoadUint32(&s.offset)
    for !atomic.CompareAndSwapUint32(&s.offset, offset, (offset+1)%s.numBuckets) {
        offset = atomic.LoadUint32(&s.offset)
    }
    offset = (offset + 1) % s.numBuckets

    for i := uint32(0); i < s.numBuckets; i++ {
        pos := int((i + offset) % s.numBuckets)
        bucket := atomic.LoadUint64(&s.streams[pos])
        if bucket == math.MaxUint64 {
            // all streams in use
            continue
        }

        for j := 0; j < bucketBits; j++ {
            mask := uint64(1 << streamOffset(j))
            for bucket&mask == 0 {
                if atomic.CompareAndSwapUint64(&s.streams[pos], bucket, bucket|mask) {
                    atomic.AddInt32(&s.inuseStreams, 1)
                    return streamFromBucket(int(pos), j), true
                }
                bucket = atomic.LoadUint64(&s.streams[pos])
            }
        }
    }

    return 0, false
}

To get this PR pushed out quickly, I decided not to fix this for now.
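
To make the sizing issue concrete, here is a tiny illustrative snippet (100 is just a hypothetical non-multiple-of-64 value):

package main

import "fmt"

func main() {
    maxStreams := 100 // hypothetical MaxRequestsPerConn value

    // Truncating division: 1 bucket, i.e. only 64 stream IDs instead of 100.
    fmt.Println(maxStreams / 64)

    // Rounding up: 2 buckets, but GetStream would then hand out up to
    // 128 stream IDs, 28 more than maxStreams allows.
    fmt.Println((maxStreams + 63) / 64)
}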

@piodul
Collaborator

piodul commented Feb 16, 2023

An immediate solution would be to round it up: (maxStreams + 63) / 64, but this code would not be aware that the last bucket is smaller than the others

Maybe we could just round up and then set the unused bits to 1 in the last bucket if maxStreams % 64 != 0?

@nuivall
Member

nuivall commented Feb 16, 2023

An immediate solution would be to round it up: (maxStreams + 63) / 64, but this code would not be aware that the last bucket is smaller than the others

Maybe we could just round up and then set the unused bits to 1 in the last bucket if maxStreams % 64 != 0?

That's good. And if you don't want to change the code, maybe for now just do the first part (round up), so that e.g. MaxRequestsPerConn = 54 will be treated as 64.
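
For context, a minimal sketch of what piodul's suggestion could look like. This is not the code from this PR; for simplicity it assumes bit j of a bucket represents stream j, whereas the real generator maps streams to bits via streamOffset:

package main

import "fmt"

// newBuckets is a hypothetical helper: it allocates ceil(maxStreams/64)
// buckets and pre-marks the tail bits of the last bucket as "in use" so
// the stream IDs beyond maxStreams are never handed out.
func newBuckets(maxStreams int) []uint64 {
    numBuckets := (maxStreams + 63) / 64
    streams := make([]uint64, numBuckets)
    if rem := maxStreams % 64; rem != 0 {
        // Bits rem..63 of the last bucket correspond to the stream IDs
        // past maxStreams; set them to 1 permanently.
        streams[numBuckets-1] = ^uint64(0) << rem
    }
    return streams
}

func main() {
    // For maxStreams = 100 the second bucket has its top 28 bits pre-set.
    fmt.Printf("%064b\n", newBuckets(100)[1])
}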

@avelanarius
Member Author

In the amended commit, I added rounding up of MaxRequestsPerConn to the nearest multiple of 64.

I think this can now be merged. A follow-up PR could relax the requirement that maxStreams has to be a multiple of 64.
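
For reference, the rounding described above amounts to something like this (a sketch; roundUpStreams is a hypothetical name, not the function from the commit):

// Round a MaxRequestsPerConn value up to the nearest multiple of 64,
// e.g. 54 -> 64, 100 -> 128, 128 -> 128.
func roundUpStreams(maxRequestsPerConn int) int {
    return (maxRequestsPerConn + 63) / 64 * 64
}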


Successfully merging this pull request may close these issues.

Load balancing is useless unless we allow configuring the conn.maxStreams value