introduces configurable dispatch hashring replication factor #1227
Conversation
@@ -108,6 +111,10 @@ type Config struct {
	TelemetryInterval time.Duration
}

const defaultBackendsPerKey = 1
it would be nice to make this configurable too while we're here
does that have practical use in SpiceDB? It would cause multiple SpiceDB nodes to own the same key range, which would reduce the likelihood of cache hits, right?
it's not very useful with smaller clusters, but with larger clusters it lets you spread the load for a hot key across multiple nodes
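The trade-off in the two comments above can be sketched with a toy consistent hashring. This is a minimal illustration, not SpiceDB's implementation: backend names, `FindN`, and the one-virtual-node-per-backend layout are all assumptions made for brevity. With `n > 1`, a hot key's dispatch load is spread across several backends, at the cost of the same key range being cached on more than one node.

```go
package main

import (
	"hash/fnv"
	"sort"
)

// ring is a hypothetical minimal hashring: each backend owns a
// single point on the ring (real hashrings use many virtual nodes).
type ring struct {
	points []uint32
	owners map[uint32]string
}

func hashOf(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

func newRing(backends []string) *ring {
	r := &ring{owners: map[uint32]string{}}
	for _, b := range backends {
		p := hashOf(b)
		r.points = append(r.points, p)
		r.owners[p] = b
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// FindN returns up to n distinct backends for a key by walking the
// ring clockwise from the key's hash. n plays the role of the
// replication factor (backends per key): with n == 1 each key maps
// to exactly one backend; with n > 1 a hot key can be served by
// several backends.
func (r *ring) FindN(key string, n int) []string {
	h := hashOf(key)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	var out []string
	for len(out) < n && len(out) < len(r.points) {
		out = append(out, r.owners[r.points[i%len(r.points)]])
		i++
	}
	return out
}
```

With three backends, `FindN(key, 2)` returns two distinct nodes for the same key, which is exactly the "spread a hot key" behavior discussed above.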
force-pushed from dd0f1d8 to 47f0722
so that we get most of the functionality of the server turned on
force-pushed from 47f0722 to 5481d3d
LGTM
This new CLI flag allows parameterizing the replication factor of the hashring used by the dispatch client-side gRPC balancer. This was a bit more involved because the balancer registry is a global singleton meant to be populated in an init block, which would have required moving flag handling outside of cobra; that is not ideal. As a solution I changed how the balancer is named, appending the replication factor to the name, and introduced a lazy-load mechanism with a mutex:

- if a balancer with a specific replication factor does not exist, the lock is acquired and the balancer is registered. This avoids races if multiple SpiceDB servers are running in the same process
- if a balancer already exists, it is not registered again
- we do not unregister the balancer when the server is wound down, as it could affect another SpiceDB instance in the same process using the same replication factor
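The lazy, mutex-guarded registration described above can be sketched as follows. This is a simplified stand-in, not the actual SpiceDB code: the name format `consistent-hashring-<factor>` and the in-memory `registry` map are assumptions, where the real implementation would call `balancer.Register` on gRPC's global registry.

```go
package main

import (
	"fmt"
	"sync"
)

// balancerName appends the replication factor to the balancer name,
// so each factor gets its own distinct entry in the global registry.
// The exact format here is hypothetical.
func balancerName(replicationFactor int) string {
	return fmt.Sprintf("consistent-hashring-%d", replicationFactor)
}

var (
	registryMu sync.Mutex
	// registry stands in for gRPC's global balancer registry.
	registry = map[string]bool{}
)

// registerOnce lazily registers the balancer for the given
// replication factor under the lock, so that multiple SpiceDB
// servers running in the same process do not race, and repeated
// calls with the same factor register it only once. Nothing is ever
// unregistered, since another instance in the process may still be
// using the same name.
func registerOnce(replicationFactor int) string {
	name := balancerName(replicationFactor)
	registryMu.Lock()
	defer registryMu.Unlock()
	if !registry[name] {
		// in the real code: balancer.Register(newBuilder(replicationFactor))
		registry[name] = true
	}
	return name
}
```

A server would then pass the returned name in its gRPC service config, and two servers with the same replication factor simply share the one registered balancer.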
force-pushed from 5481d3d to 34a4202