Support N MongoDB shards #2219
base: development/2.12
Conversation
Hello williamlardier, my role is to assist you with the merge of this pull request.
Status report is not available.
Waiting for approval: the following approvals are needed before I can proceed with the merge.
Force-pushed from 365e3d4 to b1d1cdc.
Resolved review threads (outdated):
- solution-base/mongodb/charts/mongodb-sharded/templates/shard/shard-data-statefulset.yaml
- solution-base/mongodb/charts/mongodb-sharded/templates/shard/shard-data-podmonitor.yaml
Force-pushed from eaac5b7 to 87c3664.
We add support for multiple shards. The Kustomization file is thus removed, and we instead generate it based on the current configuration. Issue: ZENKO-4641
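For reference, a minimal sketch of what such generation could look like (file names and variable names are assumptions, not the PR's actual code):

```bash
# Hypothetical helper: emit a kustomization.yaml listing one set of shard
# manifests per configured shard, instead of keeping a static file.
generate_kustomization() {
  local shard_count="$1"
  {
    echo "resources:"
    for i in $(seq 0 $(( shard_count - 1 ))); do
      echo "  - shard-${i}-statefulset.yaml"
    done
  } > kustomization.yaml
}

generate_kustomization "${MONGODB_SHARD_COUNT:-1}"
```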
Added to Zenko as an annotation, but now compatible with multiple shards. Issue: ZENKO-4641
This implementation tries to run a single MongoDB mongod process per instance so that we maximize RAM usage and performance. Issue: ZENKO-4641
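As an illustration only (not this PR's code), running a lone mongod per node usually means aligning the WiredTiger cache with the pod's RAM limit, roughly along these lines:

```bash
# Sketch, assuming MONGODB_SHARDSERVER_RAM_LIMIT is expressed in Gi (e.g. "8Gi").
# mongod's default cache is about 50% of (RAM - 1 GB); with a single mongod
# per instance, that share of the node's memory can be claimed explicitly.
RAM_LIMIT_GB="${MONGODB_SHARDSERVER_RAM_LIMIT%Gi}"
CACHE_GB=$(( (RAM_LIMIT_GB - 1) / 2 ))
exec mongod --shardsvr --wiredTigerCacheSizeGB "${CACHE_GB}"
```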
Selectors should now be updated to consider the current shard. Issue: ZENKO-4641
Issue: ZENKO-4641
- Mutualize HTTP tests in a single CI run: all deployments must now be sharded, so testing HTTP endpoints separately is not enough. - We also mutualize the runner to enable HTTPS after the initial deployments. This reduces costs and ensures a basic set of tests is executed when using 2 shards. Issue: ZENKO-4641
We used to re-run Vault functional tests in Zenko. This is no longer needed, as they are now covered by the CTST test suite. Issue: ZENKO-4641
- We want to support one P-S-S topology every 3 servers. - With support for 6+ nodes added, we now need to ensure the replica counts of configsvr & shardsvr do not exceed 3, as sketched below. Issue: ZENKO-4641
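A minimal sketch of the cap (variable names assumed, not the actual build script):

```bash
# Clamp replica counts so each replica set stays P-S-S (3 members),
# even when the cluster has 6 or more nodes.
NODE_COUNT=$(kubectl get nodes --no-headers | wc -l)
REPLICAS=$(( NODE_COUNT < 3 ? NODE_COUNT : 3 ))
```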
Will be useful for CI testing only. Issue: ZENKO-4641
The previous alert did not properly account for multiple shards, leading to alerts firing with multiple shards even in a nominal state. Issue: ZENKO-4641
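Hypothetically, the fix amounts to grouping the health expression per replica set rather than cluster-wide; the metric and label names below are assumptions, not the chart's actual rule:

```bash
# Write an illustrative per-shard alert rule: with the sum keyed by replica
# set, two healthy 3-member shards no longer breach a single-set threshold.
cat <<'EOF' > mongodb-shard-alert-example.yaml
groups:
  - name: mongodb
    rules:
      - alert: MongodbReplicaSetDegraded
        expr: sum by (set) (mongodb_mongod_replset_number_of_members) < 3
        for: 5m
EOF
```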
Force-pushed from 0db4b65 to d3be7fc.
Issue: ZENKO-4641
Force-pushed from d3be7fc to d3f8a9d.
@@ -137,24 +191,6 @@ build_solution_base_manifests() {
sed -i "s/MONGODB_SHARDSERVER_RAM_LIMIT/${MONGODB_SHARDSERVER_RAM_LIMIT}/g" $DIR/_build/root/deploy/*
sed -i "s/MONGODB_SHARDSERVER_RAM_REQUEST/${MONGODB_SHARDSERVER_RAM_REQUEST}/g" $DIR/_build/root/deploy/*
sed -i "s/MONGODB_MONGOS_RAM_REQUEST/${MONGODB_MONGOS_RAM_REQUEST}/g" $DIR/_build/root/deploy/*
Note for reviewers: this was removed because we had this logic twice in the function, maybe a rebase issue at some point. See above.
# MongoDB selectors are not supported in the CI.
# So we remove them and let the provisioner handle the
# volume provisioning.
patch_mongodb_selector() {
this does the same as the original kustomize, with a few differences:
- handles multiple shards seamlessly (as was your goal i guess)
- does not overwrite/set the storageclass to standard (not sure if it is needed)
- does not set the requested storage to 8Gi (how much gets requested? is it enough?)
The base has 10Gi, which is enough, no need to change it.

> does not overwrite/set the storageclass to standard (not sure if it is needed)

It's done, but elsewhere (it was already the case before, we were doing it twice):
sed -i 's/MONGODB_STORAGE_CLASS/standard/g' $DIR/_build/root/deploy/*

> handles multiple shards seamlessly (as was your goal i guess)

yes!
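For readers without the diff context, a minimal sketch of what such a selector-removal helper could look like, assuming yq v4 is available and the manifests are rendered under $DIR/_build/root/deploy (names are illustrative, not the actual implementation from this PR):

```bash
patch_mongodb_selector() {
  # Drop the volumeClaimTemplates selector from every shard StatefulSet so
  # the CI storage provisioner is free to bind any available volume.
  for sts in "$DIR"/_build/root/deploy/*shard*statefulset*.yaml; do
    yq -i 'del(.spec.volumeClaimTemplates[].spec.selector)' "$sts"
  done
}
```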
Force-pushed from f1098b9 to fa66847.
Force-pushed from fa66847 to 9062ee2.
Force-pushed from 9062ee2 to 440c8b5.
Added a commit to remove the matchLabel for shard 0, so that we do not require any downtime during shard expansion.
Force-pushed from 440c8b5 to 8944786.
To avoid having to delete the STS for existing deployments, we must avoid changing the matchSelectors for shard 0. Removing only the shard selector is not enough, as we risk selecting volumes from other shards. Instead, we choose to modify the app name to include the new shard ID, but only for new shards. Shard 0 won't be updated, and the new shards will have their own labels. See the sketch below. Issue: ZENKO-4641
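As a rough sketch of the naming scheme described above (label values are assumed, not taken from the chart):

```bash
# StatefulSet selectors are immutable, so shard 0 must keep its historical
# app label; only newly created shards embed their shard ID in the label.
app_label() {
  local shard_id="$1"
  if [ "$shard_id" -eq 0 ]; then
    echo "mongodb-sharded"                    # unchanged: no STS deletion needed
  else
    echo "mongodb-sharded-shard-${shard_id}"  # new shards get their own label
  fi
}
```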
Issue: ZENKO-4641
Force-pushed from 8944786 to ed4be0c.
Here, we only support deployment of multiple shards. Upgrades are managed by upper layers for now.
Issue: ZENKO-4641