[enterprise-metrics] update ingester podManagementPolicy to Parallel #920
Conversation
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
Change LGTM, but keep in mind you can't change the pod management policy on a statefulset. You have to recreate it, which may be very annoying for customers!
Lgtm
Thx for reviewing. Yep, that's the only "con" I can see with this change, but I guess we'll just have to make sure that our support and solution engineering teams are aware.
I had to force-push because I forgot to DCO-sign a commit. Apparently this invalidated the approvals this PR already had (at least I think that's why they are no longer counted).
This is a request for feedback; I'm curious whether others on the team agree that the pros outweigh the cons.

I think we should update the ingester `podManagementPolicy` to `Parallel`.

Pros:

- Scaling the statefulset down to 0 (to cut off ingestion) and back up again is much faster, because the pods are started and stopped in parallel instead of one at a time
- `podManagementPolicy` is only relevant when scaling the SFS up/down, not when rolling out updates

Cons:

- The `podManagementPolicy` of an existing statefulset cannot be changed in place; the statefulset has to be deleted with `--cascade=false` and then recreated. This is risky, because if a customer fails to specify `--cascade=false`, their write path will go down.
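For reference, the change itself is a single field on the StatefulSet spec. A minimal sketch of what the rendered manifest would look like (the names, labels, and image below are illustrative placeholders, not taken from the actual chart templates):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ingester                    # placeholder name, not the chart's real template
spec:
  podManagementPolicy: Parallel     # default is OrderedReady; this field is immutable
  serviceName: ingester
  replicas: 3
  selector:
    matchLabels:
      app: ingester
  template:
    metadata:
      labels:
        app: ingester
    spec:
      containers:
        - name: ingester
          image: example/ingester:latest   # placeholder image
```

Because `podManagementPolicy` is immutable, applying this change to an existing statefulset is rejected by the API server. As discussed above, the statefulset object has to be deleted with `kubectl delete statefulset <name> --cascade=false` (newer kubectl versions spell this `--cascade=orphan`) so that the pods are orphaned rather than terminated, and then recreated so it adopts the running pods.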