Implement overflow placement for JetStream streams. #2771
Conversation
Force-pushed d889de0 to d96e3f3
This allows stream placement to overflow to adjacent clusters. We also do more balanced placement based on resources (store or mem). We can continue to expand this as well. We also introduce an account requirement that stream configs contain a MaxBytes value. We now track account limits and server limits more distinctly, and do not reserve server resources based on account limits themselves. Signed-off-by: Derek Collison <derek@nats.io>
Force-pushed d96e3f3 to 52da55c
server/jetstream_cluster.go
Outdated
// If we have additional clusters to try we can retry.
if ci != nil && len(ci.Alternates) > 0 {
	if rg := js.createGroupForStream(ci, cfg); rg != nil {
		s.Warnf("Retrying cluster placement for stream '%s > %s'", result.Account, result.Stream)
We should probably increment some stat counter when this happens, and maybe send an advisory. It might also help to clarify why this is being retried rather than just saying it's being retried, and to include in the log (and advisory) where the placement is being done instead.
This is on the meta leader, and placement is most likely elsewhere, and determined by the server that housed the client request originally. So not sure what value there is there.
Also, the reason for moving will be resource limitations, which should be well known through monitoring and observability.
Ok, maybe just some extra info in the log then to indicate it's this. A user who comes across this for the first time would not have any idea what it relates to.
Possibly if they were doing no monitoring at all of their system to see it ran out of resources. What would you add to the description that you think would be helpful in a system like NGS? Or any super cluster tbh.
Retrying cluster placement for stream 'X > Y' due to insufficient resources in cluster C
? We probably don't need to log the cluster name.
That works, thanks.
Note though that this is not the happy path: it is where the meta-leader guesses wrong around placement, either due to concurrent requests, dropped updates around reserved or real usage, etc.
In the happy path we have no logging at all atm.
Memory         uint64 `json:"memory"`
Store          uint64 `json:"storage"`
ReservedMemory uint64 `json:"reserved_memory"`
ReservedStore  uint64 `json:"reserved_storage"`
Can these changes really be done at this point given semver?
We discussed yesterday; not totally sure yet, but the reserved portions were incorrect and needed to be fixed. The one issue identified would most likely be the nats cli.
Is it deliberate that you removed the `,omitempty`? (for both ReservedMemory and ReservedStore)
Could you comment on this question: "Is it deliberate that you removed the ,omitempty? (for both ReservedMemory and ReservedStore)"
Yes, it was deliberate, based on feedback and my usage from varz/jsz and statusz stuff.
Ok
Signed-off-by: Derek Collison <derek@nats.io>
Signed-off-by: Derek Collison <derek@nats.io>
LGTM
Signed-off-by: Derek Collison <derek@nats.io>
LGTM
This allows stream placement to overflow to adjacent clusters.
We also do more balanced placement based on resources (store or mem). We can continue to expand this as well.
We also introduce an account requirement that stream configs contain a MaxBytes value.
We now track account limits and server limits more distinctly, and do not reserve server resources based on account limits themselves.
Signed-off-by: Derek Collison <derek@nats.io>
/cc @nats-io/core