
feat(k8s): Added Kubernetes ConfigMap storage backend #142

Open · wants to merge 1 commit into master
Conversation

ThomasK33

I added a ConfigMap-based storage backend.

In a Kubernetes environment with multiple replicas attempting to perform IPAM at the same time, using in-memory storage can lead to race conditions. Alternatively, using an external database would require managing an additional component. To avoid these issues, ConfigMap-based storage can be used to keep the state in Kubernetes, allowing all replicas to access the same configuration/state.

@ThomasK33 ThomasK33 requested a review from a team as a code owner March 26, 2024 13:28
@majst01
Contributor

majst01 commented Mar 26, 2024

Hi @ThomasK33,

thanks for your contribution. Some comments on this:

  • I can't see at first glance how this approach prevents race conditions; there is no "leader election" or similar in the write calls.
  • For the sake of simple day-two operations, we have https://github.com/metal-stack/backup-restore-sidecar, which starts postgres or redis as a persistence backend, creates backups, and manages restores in case of catastrophic failures.
  • The tests actually only cover Create and Update; all other cases are not covered.
  • Why store the prefixes as keys in the ConfigMap data? Storing everything as JSON would simplify the code and would not require replacing "/" with "-", for example.

WDYT?

@ThomasK33
Author

Hey, thanks for the quick feedback.

> I can't see at first glance how this approach prevents race conditions; there is no "leader election" or similar in the write calls.

Kubernetes objects have a resourceVersion in their metadata. Say multiple replicas fetch an object at the same time and then try to update it simultaneously: the apiserver will accept one update and reject the others, because those "older" updates do not carry the newest resourceVersion. (This is similar to the optimistic locking already used in this codebase.) So while it's not leader election per se, it is a distributed write pattern in which parallel writes carrying an older resourceVersion fail, so a reconciler will eventually retry.

> For the sake of simple day-two operations, we have the metal-stack/backup-restore-sidecar, which starts postgres or redis as a persistence backend, creates backups, and manages restores in case of catastrophic failures.

Yeah, that looks nice. In this particular case, though, the goal is to embed go-ipam into an existing Go application running in a Kubernetes cluster, so adding yet another component is not an option.

> The tests actually only cover Create and Update; all other cases are not covered.

Great callout, thanks. I completely missed the testing_test.go file, which exercises all the storage backends, and had just copied and pasted the unit tests of the in-memory implementation for this one.

I've added the ConfigMap storage to testing_test.go and removed the hand-written unit tests.

> Why store the prefixes as keys in the ConfigMap data? Storing everything as JSON would simplify the code and would not require replacing "/" with "-", for example.

Yep, that was quick and dirty, based on the etcd implementation.
I've refactored it to store prefixJson maps in the ConfigMap values, so that the keys in data represent the namespaces and each value is a mapping of prefix CIDR → prefixJson.

@ThomasK33
Author

Hey @majst01, I just saw that the linter failed on missing JSON tags:

[Screenshot: linter output flagging missing JSON tags]

Would you like me to add JSON tags to all the fields?
Also, should those be the Go-style PascalCase or camelCase tags?

@majst01
Contributor

majst01 commented Apr 2, 2024

> Hey @majst01, I just saw that the linter failed on missing JSON tags:
>
> [Screenshot: linter output flagging missing JSON tags]
>
> Would you like me to add JSON tags to all the fields? Also, should those be the Go-style PascalCase or camelCase tags?

I already created a PR for this: #136, so there's no need for you to take care of that.

@ThomasK33 ThomasK33 force-pushed the master branch 2 times, most recently from 415f707 to 7dd831b Compare April 9, 2024 14:07
@majst01
Contributor

majst01 commented Jun 11, 2024

#136 is merged now; please pull master and I will review.

@ThomasK33 ThomasK33 force-pushed the master branch 2 times, most recently from 285b5f7 to d6f5964 Compare June 12, 2024 09:11
Review threads (all resolved): prefix.go, prefix_test.go (4), testing_test.go (5), kube-configmap.go (2)
@majst01
Contributor

majst01 commented Jun 13, 2024

We are almost there; could you please add the newly supported backend to the Readme.md?
