[bitnami/mongodb-sharded] Root user doesn't get created during a new installation #13364
Comments
Hello @FeiYing9, Thank you very much for your contribution. I have created a task internally to review this error. We currently have a lot on our plate. For this reason, we will notify you in this same issue when the error has been solved.
Hey @corico44. Are you able to reproduce this issue on your system at all?
Sounds good. Thanks!
Ran into exactly the same issue. It seems the init script kicks in as soon as the TCP socket is open, but mongod may not be fully started up yet (so it may only happen on slower machines, like my test cluster?). For me it seems to fail in `mongodb_sharded_reconfigure_svr_primary` inside `/opt/bitnami/scripts/libmongodb-sharded.sh`. Log from configsvr pod:
Hey @blade5502. What PV types are you using? This happens for me on GlusterFS, so I'm interested if it happens in other file systems too.
It's a single-host k3s test instance with a local file provisioner (backed by spinning rust/HDDs). I will try another installation on my real cluster with Ceph PVs later. EDIT: Identical outcome as above on the real cluster, with no custom values.yml, just `helm install mongodb bitnami/mongodb-sharded`.
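If the root cause is the init script firing as soon as the TCP port opens (as suggested above), one possible mitigation is to poll for an actual command response instead. This is only a sketch, not the chart's actual code: `wait_for_mongod` is a hypothetical helper, and it assumes `mongosh` is on the PATH.

```shell
# Hypothetical readiness helper (not the chart's actual code): succeed only
# when mongod answers a real command, not merely when the TCP port is open.
wait_for_mongod() {
  retries="${1:-30}"
  delay="${2:-2}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    # A ping only succeeds once mongod is actually serving commands.
    if mongosh --quiet --eval 'db.adminCommand({ ping: 1 })' >/dev/null 2>&1; then
      return 0
    fi
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}
```

A check like this would make the behavior independent of machine speed, whereas a bare TCP probe races with mongod's startup.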
Hi,
Am I missing something?
Hey. The problem with this issue is that I can only reproduce the error on a test cluster I have access to. However, I tried using minikube locally and it worked perfectly fine. That's why I can't pinpoint the exact issue, as it is hard to debug. This issue might not be with the chart at all, but I wanted to file this to get some guidance on it (although someone else was able to reproduce it). I also tested with chart version
I've spun up a single-node minikube cluster, and the chart deploys fine in there. Within both k3s clusters the chart fails as stated above. Versions used:
Hey @rafariossaa, were you able to test this with a multi-node cluster yet? That could be the issue that is happening.
Hi,
I have just tried in a GKE cluster (v1.23.13-gke.900) and I got no issues.
Hi @dung-tien-nguyen,
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
Has there been any additional work on this? Unfortunately, I have the same issue when trying to use the sharded chart. The standard replicaset chart works as expected.
Hi, There was a contribution but we didn't hear back: bitnami/containers#24938
@javsalgar - Since this issue is closed, would you guys prefer I reopen another issue with my details?
Hi, Yes, I think that would be better.
Hello, I am experiencing an issue similar to what's described in this thread when trying to deploy a MongoDB replication cluster using the Bitnami charts. I have a PVC from a previous deployment that I'm trying to reuse in the new setup.

Upon installing the new cluster, the secondary nodes fail to join the primary node. Additionally, when attempting to connect to the secondary nodes via the CLI, I encounter authentication errors. I have checked the logs on the secondary nodes and confirmed they are attempting to connect to the primary but are unable to authenticate successfully. It seems that the secondaries are unable to use the credentials provided to join the existing replica set, despite following the documentation for the upgrade process and ensuring that the PVC retains the data from the previous deployment.

Could you provide guidance on how to resolve this issue? Specifically, I am looking for steps to ensure that the secondary nodes can authenticate with the primary and join the replica set without any issues, utilizing the existing PVC. Thank you.
Hi! As this issue is very old, would you mind opening a new ticket referencing this one?
Name and Version
bitnami/mongodb-sharded 6.1.10
What steps will reproduce the bug?
Install the chart with the following values:

- `global.storageClass=<storageClass name>`
- `volumePermissions.enabled=true`
- `auth.enabled=true`
- `auth.rootUser=root`
- `auth.existingSecret=<existing-secret>`

Alternatively, you can provide the `rootPassword` instead of the secret. The installation then fails with `Authentication Failed` after the config server restarts MongoDB. The config server logs say that it did not find a user named `root` in the `admin` database.
Are you using any custom parameters or values?
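The values above correspond to a helm invocation along these lines (a sketch of the reproduction; `<storageClass name>` and `<existing-secret>` are placeholders for cluster-specific values):

```shell
# Hypothetical reproduction command; the placeholders must be replaced with
# the cluster's storage class and a pre-created secret holding the root password.
helm install mongodb-sharded bitnami/mongodb-sharded \
  --set global.storageClass=<storageClass name> \
  --set volumePermissions.enabled=true \
  --set auth.enabled=true \
  --set auth.rootUser=root \
  --set auth.existingSecret=<existing-secret>
```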
What is the expected behavior?
I expect the chart to correctly create the root user so the pods can authenticate after installation.
What do you see instead?
Here are the logs for both the config server and mongos (with `image.debug=true`). The config server restarts after stopping the database, but I've included the logs from after the restart as well.

mongodb-sharded-mongos
Additional information
This is pretty much the same problem as an issue I created before. However, now I have a lot more information and have narrowed down the steps to reproduce it.
First, I have tried many different options, including changing the ports, disabling host names, changing persistence access modes, etc. I've also run the pod in `diagnosticMode` a few times to try to get a better look at the issue, but that didn't help very much in pinpointing it.

I was looking at the scripts used to set up the database in the `bitnami/containers` repository. I noticed the hostname provided when creating the root user is `127.0.0.1` (link to line of code). I built my own image and removed the hostname so the script would automatically retrieve it (the hostname was `mongodb-sharded-configsvr`). That fixed an issue I previously got in the logs (it can be seen on the previous issue), but the root user still never seemed to be created.

This makes me think that either the `createUser` function does not succeed and the script never detects/logs the error, or the data is not written correctly to the filesystem, which means that stopping the database to enable authentication would erase the user, since it was never persisted to disk.

Please let me know if you have any suggestions on how to fix this. Also, let me know of any more information I could provide that would be helpful. By the way, I tested the `bitnami/mongodb` chart and that worked perfectly fine, so there aren't direct incompatibilities between my k8s cluster and MongoDB.
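One way to narrow down which of the two hypotheses above holds would be to ask mongod directly, before the restart, whether the user exists. This is a hypothetical diagnostic, not part of the chart; `root_user_exists` is an invented helper and it assumes `mongosh` is available inside the configsvr pod while authentication is not yet enforced:

```shell
# Hypothetical diagnostic (not part of the chart): after the setup scripts
# run, check whether the "root" user actually exists in the admin database.
root_user_exists() {
  mongosh --quiet admin --eval \
    'print(db.getUser("root") === null ? "missing" : "present")' \
    | grep -q '^present$'
}
```

If this reports the user as present before the restart but the user is gone afterwards, the problem would point at persistence rather than at `createUser` silently failing.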