feat: v2 swift ipampool #2422
Conversation
nddq
left a comment
Left some comments. I'm wondering if we could try running some stress tests on an actual cluster (or maybe you have done that already 🙂)
I don't have numbers, but I validated that it works as expected and reaches the target state faster for big swings in Pod scheduling 🙂
nddq
left a comment
LGTM 🚀
Going to move this one forward, and will consider the feedback while running it for real to refine it.
Reason for Change:
The v2 IPAM Pool Monitor introduces idempotent scaling math, migrates from a polling architecture to an event-driven one, and uses the Pod watcher to derive IP demand instead of triggering scaling on the count of free IPs in the pool.
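For context, a minimal sketch of what an event-driven monitor loop can look like, assuming a hypothetical channel-based demand source fed by the Pod watcher (illustrative names only, not the actual CNS implementation):

```go
package main

import (
	"context"
	"fmt"
)

// demandSource abstracts the Pod watcher: it emits the current number of
// Pod IPs demanded on the node whenever that number changes (hypothetical).
type demandSource <-chan int

// monitor reconciles the IP pool toward the demanded size on each event,
// instead of waking up on a fixed polling interval.
func monitor(ctx context.Context, events demandSource, reconcile func(demand int)) {
	for {
		select {
		case <-ctx.Done():
			return
		case demand, ok := <-events:
			if !ok {
				return
			}
			reconcile(demand)
		}
	}
}

func main() {
	events := make(chan int, 2)
	events <- 30  // small steady-state demand
	events <- 100 // large scale-up event
	close(events)
	monitor(context.Background(), events, func(demand int) {
		fmt.Printf("reconciling pool for demand=%d IPs\n", demand)
	})
}
```

Because reconciliation is driven by demand changes rather than a timer, a large Pod scale-up triggers an immediate reconcile instead of waiting out polling ticks.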
Notably, this change improves pool scaling performance from O(n) to O(1): the target pool size is computed directly from demand in a single step rather than converged on one batch at a time.
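As an illustration of idempotent, O(1) target math, here is a sketch with hypothetical names and an illustrative formula (the real calculation lives in the v2 monitor; see the design doc #2013). Recomputing with unchanged inputs always returns the same answer:

```go
package main

import "fmt"

// targetPoolSize rounds demand plus a free-IP buffer up to the nearest
// multiple of the allocation batch size, in one O(1) computation.
// Names and formula are illustrative, not the actual CNS math.
func targetPoolSize(demand, batch int, minFreeFraction float64) int {
	buffer := int(minFreeFraction * float64(batch))        // reserved headroom
	return ((demand + buffer + batch - 1) / batch) * batch // ceil to batch
}

func main() {
	// A 100-Pod scale-up with batch=16 and a 50% free buffer yields the
	// final target in one step, rather than stepping one batch per
	// polling interval as in v1.
	fmt.Println(targetPoolSize(100, 16, 0.5)) // 112
}
```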
Testing on a single-node Linux DS2v2 cluster with a 100-Pod pause-container scale-up yields:
Issue Fixed:
Requirements:
Notes: Not to be confused with SwiftV2; this applies to all dynamic Pod IP Swift scenarios.
Design docs for reference: #2013