feature request: option to stepdown after backup #39
Comments
Interesting. Could you explain what you mean by stepping down the primary? Do you mean setting it as a secondary and restarting it? How do you do that from the command line? I just want to make sure I get it right. I have been busy lately, but I would like to implement this once I have time and know exactly how to do it. You are welcome to send a PR if you want.
Yes, that is correct. One way to do it would be `rs.stepDown()` from the mongo shell. I don't have any experience with doing it programmatically, though.
According to the documentation, it seems like I only need to call `db.getSiblingDB("admin").shutdownServer({ "timeoutSecs": 60 })` and it should handle the stepdown and shutdown together. I do want to know if there is a way to do a restart directly from the command line, though.
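For reference, the same thing can be issued programmatically through the official Go driver. A minimal sketch of the stepdown-only variant (the helper name, URI, and timeout here are illustrative assumptions, not mgob's actual code):

```go
package main

import (
	"context"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

// stepDownPrimary asks the connected primary to step down for `secs`
// seconds via the replSetStepDown admin command. The server drops
// client connections when it steps down, so a network error here can
// actually indicate success.
func stepDownPrimary(ctx context.Context, uri string, secs int) error {
	client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
	if err != nil {
		return err
	}
	defer client.Disconnect(ctx)

	return client.Database("admin").
		RunCommand(ctx, bson.D{{Key: "replSetStepDown", Value: secs}}).
		Err()
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := stepDownPrimary(ctx, "mongodb://localhost:27017", 60); err != nil {
		log.Printf("stepdown returned: %v (often just the dropped connection)", err)
	}
}
```

The shutdown-and-stepdown variant would instead run the `shutdown` admin command with a `timeoutSecs` field, which is what `db.shutdownServer()` wraps.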
Thanks for doing this... do you know if it handles the case where auth is enabled? From the docs, `shutdownServer` has to be run by an authenticated user with the proper privileges when access control is on.
I don't think so. I didn't realize that part either. But I think it is not too hard to set up authentication, right?
Perhaps just a note in the README explaining that auth is required would suffice?
Agreed. Does this work for you, anyway?
Other than my prod cluster, I don't have any other clusters that use both replica sets and auth. Adding to that, I have yet to switch over from stefanprodan/mgob. I'll report back here once I have been able to test this out.
Sorry for the delay. I began testing this out, but could not find a way to have the db password retrieved from a secret or via env variable expansion. An issue asking to allow passwords stored in secrets was created in the original repo but never addressed (it was auto-closed once the repo was archived): stefanprodan/mgob#58. A workaround was proposed to put the entire config in a secret, but that seems burdensome and overly complicates configuration. Would you be interested in implementing this feature? I use the bitnami/mongodb chart, which automatically creates a k8s secret with the root password.
Well, I am not sure what the best way is for this one. You are welcome to make a PR. I think the easiest way is making the whole config load from the secret. I don't have this issue myself, since all my secrets are injected on the fly, so whether it comes from a secret or a config doesn't really matter to me.
How do you inject secrets on the fly? Perhaps I can use that technique.
A simple, hacky approach for env var expansion I found from a quick scan: https://mtyurt.net/post/go-using-environment-variables-in-configuration-files.html

The mgob config would become:

```yaml
target:
  password: ${MONGODB_ROOT_PASSWORD}
```

The env:

```yaml
env:
  - name: MONGODB_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        key: mongodb-root-password
        name: db-mongodb
```

Or piggyback on the above approach but utilize https://github.com/spf13/viper along with templates (spf13/viper#315 (comment)), which would enable template-based substitution in the config.
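In Go, the technique from that article boils down to running the raw config through `os.ExpandEnv` before unmarshalling. A minimal sketch, assuming a config shaped like the snippet above (the struct and file name are just for illustration, not mgob's actual loader):

```go
package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// Config mirrors just the fragment shown above; mgob's real config
// has many more fields.
type Config struct {
	Target struct {
		Password string `yaml:"password"`
	} `yaml:"target"`
}

func main() {
	raw, err := os.ReadFile("config.yml")
	if err != nil {
		log.Fatal(err)
	}

	// Expand ${MONGODB_ROOT_PASSWORD} and friends from the pod
	// environment before the YAML is parsed.
	expanded := os.ExpandEnv(string(raw))

	var cfg Config
	if err := yaml.Unmarshal([]byte(expanded), &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("password set:", cfg.Target.Password != "")
}
```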
What if someone has multiple mongodbs? It might work to use an init container that loads the secret and updates the config before running the main part. However, I don't have time to work on this right now. You are welcome to make a PR, though.
@jamesholcomb I think I got what you want. You can use an env variable like PLAN-ID_AZURE_CONNECTIONSTRING. Give it a try: docker pull maxisam/mgob:dev.225
Sure, will give this a try. I see you added viper to read the config in #80. Is this using env var expansion or templates or ...? Please show an example if you can.
It just uses environment variables so far. The env name format is like the example in mgob/.github/workflows/build.yml (line 102 at 81b39b0), e.g. PLAN-ID_AZURE_CONNECTIONSTRING.
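If it follows the standard viper pattern, the env overlay looks roughly like this (a sketch of generic viper usage, not necessarily the exact wiring in #80):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/spf13/viper"
)

func main() {
	v := viper.New()
	// Let env vars override config keys: a key like
	// "target.password" is looked up as TARGET_PASSWORD
	// in the environment.
	v.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
	v.AutomaticEnv()

	fmt.Println(v.GetString("target.password"))
}
```

With that pattern, exporting the matching variable before starting the container feeds the value in without touching the config file.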
How would I inject a mongodb password from a k8s secret with this technique?
Background
I have a mongodb@4.4 three-member replica set running on k8s with automated backups using mgob. Its memory limit (400M) is set to the max operational amount based on the working set, to save $. This works great until the nightly backup is executed. When a backup completes, the primary's memory spikes and will eventually do one of two things: get OOMKilled, or degrade until performance goes to hell.
My efforts to figure out how to limit mongodb memory usage have been fruitless. I've tried changing the WiredTiger cache limits, increasing memory limits, etc. The cluster is maxed to the point where I would have to spin up more nodes ($) to increase memory further.
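(For context, the WiredTiger cache knob referred to above is the cacheSizeGB setting in mongod.conf; the value below is just an example of capping it below the default.)

```yaml
# mongod.conf
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.25
```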
I'd like to propose an option to step down the primary after a successful dump. This is what I end up doing manually after seeing the primary getting close to OOMKilled or performance going to hell.
What do you think?
Thank you for taking on the maintenance of this critical piece of infra!