
[bitnami/mongodb-sharded] Root user doesn't get created during a new installation #13364

Closed
jdholtz opened this issue Nov 5, 2022 · 32 comments
Labels: mongodb-sharded, stale (15 days without activity), tech-issues (The user has a technical issue about an application), triage (Triage is needed)

@jdholtz
Contributor

jdholtz commented Nov 5, 2022

Name and Version

bitnami/mongodb-sharded 6.1.10

What steps will reproduce the bug?

  1. Install mongodb-sharded using a storage class with a manual provisioner, backed by GlusterFS PVs
  2. Provide these values to the chart: global.storageClass=<storageClass name>, volumePermissions.enabled=true, auth.enabled=true, auth.rootUser=root, auth.existingSecret=<existing-secret>. Alternatively, you can provide rootPassword instead of the secret.
  3. All pods reach Running status, but the mongos logs show Authentication Failed after the config server restarts MongoDB, and the config server logs report that no user named root was found in the admin database.

Are you using any custom parameters or values?

global.storageClass=storageClass name
auth.enabled=true
auth.rootUser=root
auth.existingSecret=existing-secret or rootPassword=password
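For reference, the values above correspond to an install command along these lines. This is only a sketch: the release name and the my-* placeholder values are illustrative, not taken from the report.

```shell
#!/usr/bin/env sh
# Sketch of the helm invocation implied by the values above.
# "my-storage-class" and "my-mongodb-secret" are placeholders.
RELEASE=mongodb
CHART=bitnami/mongodb-sharded

set -- \
  --set global.storageClass=my-storage-class \
  --set volumePermissions.enabled=true \
  --set auth.enabled=true \
  --set auth.rootUser=root \
  --set auth.existingSecret=my-mongodb-secret

# Print the full command instead of running it, so it can be reviewed first.
echo helm install "$RELEASE" "$CHART" "$@"
```

Swapping the auth.existingSecret flag for --set auth.rootPassword=<password> reproduces the alternative path mentioned above.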

What is the expected behavior?

I expect the chart to correctly create the root user so the pods can authenticate after installation.

What do you see instead?

Here are the logs for both the config server and mongos (with image.debug=true). The config server restarts after stopping the database, but I've included the logs from after the restart as well.

 03:32:58.74 INFO  ==> Setting node as primary
mongodb 03:32:58.78
mongodb 03:32:58.78 Welcome to the Bitnami mongodb-sharded container
mongodb 03:32:58.78 Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 03:32:58.79 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 03:32:58.79
mongodb 03:32:58.79 INFO  ==> ** Starting MongoDB Sharded setup **
mongodb 03:32:58.83 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 03:32:58.91 INFO  ==> Initializing MongoDB Sharded...
mongodb 03:32:58.95 INFO  ==> Deploying MongoDB Sharded from scratch...
mongodb 03:32:58.97 DEBUG ==> Starting MongoDB in background...
{"t":{"$date":"2022-11-05T03:32:59.035Z"},"s":"I",  "c":"CONTROL",  "id":5760901, "ctx":"-","msg":"Applied --setParameter options","attr":{"serverParameters":{"enableLocalhostAuthBypass":{"default":true,"value":true}}}}
about to fork child process, waiting until server is ready for connections.
forked process: 57
child process started successfully, parent exiting
MongoNetworkError: connect ECONNREFUSED 172.16.164.215:27017
mongodb 03:33:21.96 DEBUG ==> Validating 127.0.0.1 as primary node...
mongodb 03:33:23.71 DEBUG ==> Starting MongoDB in background...
mongodb 03:33:23.72 INFO  ==> Creating users...
mongodb 03:33:23.72 INFO  ==> Creating root user...
Current Mongosh Log ID:	6365d9843976813d30cd0579
Connecting to:		mongodb://mongodb-sharded-configsvr-0.mongodb-sharded-headless.mogno-test.svc.cluster.local:27017/?directConnection=true&appName=mongosh+1.6.0
MongoNetworkError: connect ECONNREFUSED 172.16.164.215:27017
mongodb 03:33:24.80 INFO  ==> Stopping MongoDB...

The pod then restarts
mongodb 03:33:51.29
mongodb 03:33:51.29 Welcome to the Bitnami mongodb-sharded container
mongodb 03:33:51.29 Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 03:33:51.29 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 03:33:51.30
mongodb 03:33:51.30 INFO  ==> ** Starting MongoDB Sharded setup **
mongodb 03:33:51.33 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 03:33:51.41 INFO  ==> Initializing MongoDB Sharded...
mongodb 03:33:51.48 INFO  ==> Writing keyfile for replica set authentication...
mongodb 03:33:51.50 INFO  ==> Enabling authentication...
mongodb 03:33:51.51 INFO  ==> Deploying MongoDB Sharded with persisted data...

mongodb 03:33:51.54 INFO  ==> ** MongoDB Sharded setup finished! **
mongodb 03:33:51.57 INFO  ==> ** Starting MongoDB **
{"t":{"$date":"2022-11-05T03:33:51.640Z"},"s":"I",  "c":"CONTROL",  "id":5760901, "ctx":"-","msg":"Applied --setParameter options","attr":{"serverParameters":{"enableLocalhostAuthBypass":{"default":true,"value":false}}}}

mongodb-sharded-mongos

mongodb 03:35:19.23
mongodb 03:35:19.23 Welcome to the Bitnami mongodb-sharded container
mongodb 03:35:19.24 Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 03:35:19.24 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 03:35:19.24
mongodb 03:35:19.25 INFO  ==> ** Starting MongoDB Sharded setup **
mongodb 03:35:19.28 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 03:35:19.35 INFO  ==> Initializing Mongos...
mongodb 03:35:19.35 INFO  ==> Writing keyfile for replica set authentication...
mongodb 03:35:19.38 DEBUG ==> Waiting for primary node...
mongodb 03:35:19.38 DEBUG ==> Waiting for primary node...
mongodb 03:35:19.39 INFO  ==> Trying to connect to MongoDB server mongodb-sharded-configsvr-0.mongodb-sharded-headless.mongo-test.svc.cluster.local...
mongodb 03:35:19.39 INFO  ==> Found MongoDB server listening at mongodb-sharded-configsvr-0.mongodb-sharded-headless.mongo-test.svc.cluster.local:27017 !
MongoServerError: Authentication failed.

Additional information

This is essentially the same issue as one I created before. However, I now have much more information and have narrowed down the steps to reproduce it.

First, I have tried many different options, including changing the ports, disabling host names, changing persistence access modes, etc. I've also run the pod in diagnosticMode a few times to try to get a better look at the issue, but that didn't help much in pinpointing it.

I was looking at the scripts used to set up the database in the bitnami/containers repository. I noticed that the hostname provided when creating the root user is 127.0.0.1 (link to line of code). I built my own image and removed the hostname so the script would retrieve it automatically (the hostname was mongodb-sharded-configsvr). That fixed an error I previously saw in the logs (visible in the previous issue), but the root user still never seemed to be created.

This makes me think that either the createUser function does not succeed and the script never detects/logs the error, or the data is not written correctly to the filesystem. In the latter case, stopping the database to enable authentication would erase the user, since it was never persisted to disk.
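One way to test the first hypothesis would be to read the user back before the scripts stop mongod, so a silent createUser failure is caught early. This is a hypothetical sketch, not Bitnami's actual code: mongosh_eval and verify_root_user are made-up names, and the stub merely simulates a server response.

```shell
#!/usr/bin/env sh
# Hypothetical check: confirm the root user actually exists before
# mongod is stopped to enable auth. "mongosh_eval" stands in for however
# the setup scripts invoke mongosh; it is not their real helper.
mongosh_eval() {
  # Stub for illustration: pretend the server reports one matching user.
  echo '{ "users": [ { "user": "root", "db": "admin" } ] }'
}

verify_root_user() {
  mongosh_eval 'db.getSiblingDB("admin").getUser("root")' \
    | grep -q '"user": "root"'
}

if verify_root_user; then
  echo "root user persisted"
else
  echo "root user missing: refusing to stop mongod" >&2
  exit 1
fi
```

If the scripts bailed out here instead of proceeding to "Stopping MongoDB...", the failure would surface in the config server logs rather than later as an mongos authentication error.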

Please let me know if you have any suggestions on how to fix this, and let me know if there is any more information I could provide that would be helpful. For what it's worth, I tested the bitnami/mongodb chart and it worked perfectly fine, so there is no direct incompatibility between my k8s cluster and MongoDB.

@jdholtz jdholtz added the tech-issues The user has a technical issue about an application label Nov 5, 2022
@bitnami-bot bitnami-bot added this to Triage in Support Nov 5, 2022
@github-actions github-actions bot added the triage Triage is needed label Nov 5, 2022
@jdholtz jdholtz changed the title [bitnami/mongodb-sharded] User doesn't get created during a new installation [bitnami/mongodb-sharded] Root user doesn't get created during a new installation Nov 5, 2022
@javsalgar javsalgar moved this from Triage to In progress in Support Nov 7, 2022
@github-actions github-actions bot added in-progress and removed triage Triage is needed labels Nov 7, 2022
@bitnami-bot bitnami-bot assigned corico44 and unassigned javsalgar Nov 7, 2022
@corico44
Contributor

Hello @FeiYing9,

Thank you very much for your contribution. I have created an internal task to review this error. We currently have a lot on our plate, so we will notify you in this same issue once the error has been solved.

@corico44 corico44 moved this from In progress to On hold in Support Nov 17, 2022
@github-actions github-actions bot moved this from On hold to Pending in Support Nov 17, 2022
@github-actions github-actions bot added on-hold Issues or Pull Requests with this label will never be considered stale and removed in-progress labels Nov 17, 2022
@jdholtz
Contributor Author

jdholtz commented Nov 23, 2022

Hey @corico44. Are you able to reproduce this issue on your system at all?

@corico44
Contributor

Hello @jdholtz,

I wasn't able to reproduce it. Also, since an issue was already opened in the past (#11715) and other colleagues have tried to test it, I have created an internal ticket so that this problem can be fixed. We will notify you in this same issue when the error has been solved.

@jdholtz
Contributor Author

jdholtz commented Nov 30, 2022

Sounds good. Thanks!

@blade5502

blade5502 commented Dec 13, 2022

Ran into exactly the same issue. It seems the init script kicks in as soon as the TCP socket is open, but mongod may not be fully started up yet (so perhaps it only happens on slower machines, like my test cluster?). For me, it seems to fail in mongodb_sharded_reconfigure_svr_primary inside /opt/bitnami/scripts/libmongodb-sharded.sh.

Log from configsvr pod:

mongodb 00:42:55.43 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 00:42:55.49 INFO  ==> Initializing MongoDB Sharded...
mongodb 00:42:56.03 INFO  ==> Deploying MongoDB Sharded from scratch...
MongoNetworkError: connect ECONNREFUSED 10.42.0.127:27017
mongodb 00:43:28.69 INFO  ==> Creating users...
mongodb 00:43:28.69 INFO  ==> Creating root user...
Current Mongosh Log ID:	6397cab249a5ec426a90318d
Connecting to:		mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.6.0
Using MongoDB:		6.0.3
Using Mongosh:		1.6.0

For mongosh info see: https://docs.mongodb.com/mongodb-shell/

/snip - removed unnecessary mongosh output

mongodb-sharded-1670892133-configsvr [direct: secondary] test> Uncaught 
MongoServerError: not primary
mongodb 00:43:31.83 INFO  ==> Users created
mongodb 00:43:31.83 INFO  ==> Writing keyfile for replica set authentication...
mongodb 00:43:31.86 INFO  ==> Enabling authentication...
mongodb 00:43:32.06 INFO  ==> Configuring MongoDB Sharded replica set...
mongodb 00:43:32.07 INFO  ==> Stopping MongoDB...
mongodb 00:44:16.18 INFO  ==> Configuring MongoDB primary node...: mongodb-sharded-1670892133-configsvr-0.mongodb-sharded-1670892133-headless.default.svc.cluster.local
MongoServerError: Authentication failed.
MongoServerError: Authentication failed.
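The "not primary" error in the log above suggests the root user is created before the config server replica set member has finished becoming PRIMARY, which fits the race described at the start of this comment. A minimal sketch of polling for a writable primary rather than just an open TCP socket (hypothetical code, not Bitnami's: is_writable_primary stands in for a real mongosh check such as db.hello().isWritablePrimary, and the stub simulates a slow election):

```shell
#!/usr/bin/env sh
# Sketch: wait for a writable PRIMARY instead of merely an open socket.
ATTEMPTS=0
is_writable_primary() {
  # Stub: succeed on the third attempt to simulate a slow election.
  ATTEMPTS=$((ATTEMPTS + 1))
  [ "$ATTEMPTS" -ge 3 ]
}

wait_for_primary() {
  retries=10
  while [ "$retries" -gt 0 ]; do
    if is_writable_primary; then
      echo "primary ready after attempt $ATTEMPTS"
      return 0
    fi
    retries=$((retries - 1))
    sleep 0  # real code would sleep a few seconds between polls
  done
  return 1
}

wait_for_primary
```

Gating user creation on a check like this would make the behavior independent of machine speed, which may explain why faster clusters don't hit the bug.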

@jdholtz
Contributor Author

jdholtz commented Dec 13, 2022

Hey @blade5502. What PV types are you using? This happens for me on GlusterFS, so I’m interested if it happens in other file systems too.

@blade5502

blade5502 commented Dec 13, 2022

It's a single-host k3s test instance with a local file provisioner (backed by spinning rust/HDDs).

Will try another installation on my real cluster with ceph PV later

EDIT: Identical outcome as above on the real cluster. No custom values.yml, just helm install mongodb bitnami/mongodb-sharded.

@rafariossaa
Contributor

Hi,
I am trying to reproduce the issue using minikube and chart version 6.2.0 with the default values.yaml, but for me it is working as expected:

$ helm install mymongo -f values.yaml --set auth.rootPassword=testpwd .
...
mmongodb 11:54:28.89
mmongodb 11:54:28.89 Welcome to the Bitnami mongodb-sharded container
mmongodb 11:54:28.89 Subscribe to project updates by watching https://github.com/bitnami/containers
mmongodb 11:54:28.89 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mmongodb 11:54:28.89
mmongodb 11:54:28.89 INFO  ==> ** Starting MongoDB Sharded setup **
mmongodb 11:54:28.90 INFO  ==> Validating settings in MONGODB_* env vars...
mmongodb 11:54:28.94 INFO  ==> Initializing Mongos...
mmongodb 11:54:28.94 INFO  ==> Writing keyfile for replica set authentication...
mmongodb 11:54:28.95 INFO  ==> Trying to connect to MongoDB server mymongo-mongodb-sharded-configsvr-0.mymongo-mongodb-sharded-headless.default.svc.cluster.local...
cannot resolve host "mymongo-mongodb-sharded-configsvr-0.mymongo-mongodb-sharded-headless.default.svc.cluster.local": lookup mymongo-mongodb-sharded-configsvr-0.mymongo-mongodb-sharded-headless.default.svc.cluster.local: no such host
cannot resolve host "mymongo-mongodb-sharded-configsvr-0.mymongo-mongodb-sharded-headless.default.svc.cluster.local": lookup mymongo-mongodb-sharded-configsvr-0.mymongo-mongodb-sharded-headless.default.svc.cluster.local: no such host
cannot resolve host "mymongo-mongodb-sharded-configsvr-0.mymongo-mongodb-sharded-headless.default.svc.cluster.local": lookup mymongo-mongodb-sharded-configsvr-0.mymongo-mongodb-sharded-headless.default.svc.cluster.local: no such host
cannot resolve host "mymongo-mongodb-sharded-configsvr-0.mymongo-mongodb-sharded-headless.default.svc.cluster.local": lookup mymongo-mongodb-sharded-configsvr-0.mymongo-mongodb-sharded-headless.default.svc.cluster.local: no such host
cannot resolve host "mymongo-mongodb-sharded-configsvr-0.mymongo-mongodb-sharded-headless.default.svc.cluster.local": lookup mymongo-mongodb-sharded-configsvr-0.mymongo-mongodb-sharded-headless.default.svc.cluster.local: no such host
mmongodb 11:54:54.39 INFO  ==> Found MongoDB server listening at mymongo-mongodb-sharded-configsvr-0.mymongo-mongodb-sharded-headless.default.svc.cluster.local:27017 !
mmongodb 11:54:55.87 INFO  ==> MongoDB server listening and working at mymongo-mongodb-sharded-configsvr-0.mymongo-mongodb-sharded-headless.default.svc.cluster.local:27017 !
mmongodb 11:54:57.42 INFO  ==> Primary node ready.
mmongodb 11:54:57.43 INFO  ==> ** MongoDB Sharded setup finished! **

mmongodb 11:54:57.44 INFO  ==> ** Starting MongoDB **
{"t":{"$date":"2022-12-15T11:54:57.475Z"},"s":"I",  "c":"CONTROL",  "id":5760901, "ctx":"-","msg":"Applied --setParameter options","attr":{"serverParameters":{"enableLocalhostAuthBypass":{"default":true,"value":false}}}}

Am I missing something?

@rafariossaa rafariossaa self-assigned this Dec 15, 2022
@jdholtz
Contributor Author

jdholtz commented Dec 16, 2022

Hey. The problem with this issue is that I can only reproduce the error on a test cluster I have access to. However, I tried using minikube locally and it worked perfectly fine. That's why I can't pinpoint the exact issue, as it is hard to debug. This issue might not be with the chart at all, but I wanted to file this to get some guidance on it (although someone else was able to reproduce it).

I also tested with chart version 5.0.0 and image tag 5.0.9-debian-11-r4 as well as chart version 4.0.1 and image tag 4.2.21-debian-10-r7 but I ran into the same issue.

@blade5502

I've spun up a single-node minikube cluster, and the chart deploys fine there. Within both k3s clusters, the chart fails as stated above.

Versions used:
k3s v1.23.13 & v1.25.4
minikube v1.23.13

@jdholtz
Copy link
Contributor Author

jdholtz commented Jan 12, 2023

Hey @rafariossaa, have you been able to test this with a multi-node cluster yet? That could be what's triggering the issue.

@rafariossaa
Contributor

Hi,
I am picking this issue back up this week. Sorry for the delay.

@rafariossaa
Contributor

I have just tried in a GKE cluster (v1.23.13-gke.900) and ran into no issues.
Maybe it is somehow k3s-related.

@github-actions github-actions bot moved this from Triage to Pending in Support Mar 22, 2023
@rafariossaa
Contributor

Hi @dung-tien-nguyen,
Could you take a look at the comments on your PR?

@github-actions

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

@github-actions github-actions bot added the stale 15 days without activity label Apr 14, 2023
@github-actions

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.

@bitnami-bot bitnami-bot closed this as not planned Won't fix, can't repro, duplicate, stale Apr 19, 2023
@bitnami-bot bitnami-bot moved this from Pending to Solved in Support Apr 19, 2023
@github-actions github-actions bot removed this from Solved in Support Apr 26, 2023
@sethjones

sethjones commented Jun 5, 2023

Has there been any additional work on this? I unfortunately have the same issue when trying to use the sharded chart. The standard replicaset chart works as expected.

@bitnami-bot bitnami-bot added this to Triage in Support Jun 5, 2023
@github-actions github-actions bot removed the solved label Jun 5, 2023
@javsalgar
Contributor

Hi,

There was a contribution, but we didn't hear back: bitnami/containers#24938

@github-actions github-actions bot moved this from Triage to Pending in Support Jun 5, 2023
@sethjones

@javsalgar - Since this issue is closed, would you guys prefer I reopen another issue with my details?

@bitnami-bot bitnami-bot moved this from Pending to Triage in Support Jun 5, 2023
@javsalgar
Contributor

Hi,

Yes, I think that would be better

@github-actions github-actions bot moved this from Triage to Pending in Support Jun 6, 2023
@carrodher carrodher moved this from Pending to Solved in Support Jul 6, 2023
@github-actions github-actions bot added the solved label Jul 6, 2023
@github-actions github-actions bot removed this from Solved in Support Jul 6, 2023
@emahdij
Contributor

emahdij commented Nov 14, 2023

Hello,

I am experiencing an issue similar to what's described in this thread when trying to deploy a MongoDB replication cluster using the Bitnami charts. I have a PVC from a previous deployment that I'm trying to reuse in the new setup.

Upon installing the new cluster, the secondary nodes fail to join the primary node. Additionally, when attempting to connect to the secondary nodes via CLI, I encounter authentication errors.

I have checked the logs on the secondary nodes and confirmed they are attempting to connect to the primary but are unable to authenticate successfully. I receive an authentication error when trying to connect to the secondary nodes.

It seems that the secondaries are unable to use the credentials provided to join the existing replica set, despite following the documentation for the upgrade process and ensuring that the PVC retains the data from the previous deployment.

Could you provide guidance on how to resolve this issue? Specifically, I am looking for steps to ensure that the secondary nodes can authenticate with the primary and join the replica set without any issues, utilizing the existing PVC.

Thank you.

@bitnami-bot bitnami-bot added this to Triage in Support Nov 14, 2023
@github-actions github-actions bot removed the solved label Nov 14, 2023
@javsalgar
Contributor

Hi!

As this issue is very old, would you mind opening a new ticket referencing this one?
