[stable/mongodb-replicaset] Auth not enabled or required when auth.enabled=true #2976
An update on this; just setting the chart's auth values was not the whole story for me. Edit: Just to clarify, these are the lines that need to be restored from being commented out:
Aside from that, I am also running into the problem where the two settings above properly enable auth in the replica set, but my admin user is never created, and thus I can't access the database to create my service users. I am pretty much the opposite of you: I have experience with Kubernetes and Helm, but not very much with Mongo. I noticed that the actual effect of providing the secret is that the corresponding environment variables are set on the containers. Edit2: If I run |
Thanks for the information! However, I'm still getting inconsistent results with enabling auth. I took your advice and compared the following two install cases:
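As an illustration only, the commands below are a guess at what the two cases contrasted (release, user, password, and key values are placeholders, not the originals); presumably the difference is in how the auth values are passed to Helm:
# Case 1: all auth values in a single --set argument
helm install --set auth.enabled=true,auth.adminUser=admin,auth.adminPassword=secret,auth.key=verylongkeystring stable/mongodb-replicaset
# Case 2: auth values supplied through a values file instead
helm install -f auth-values.yaml stable/mongodb-replicaset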
|
Just wanted to toss my 👍 to all this. I ran into this issue as well and realized that DBs were not actually locked down as I thought they were... thankfully I'm testing everything locally. |
Even with auth enabled I'm still not able to authenticate to the cluster. By enabling everything like @dimhoLt suggests, I am able to actually turn on authentication (because the
I'm running into the same issue as well. Don't really have an idea as to what is going on yet but I think that this repo might actually be using bad ENV vars for the admin username and password. Been digging a bit further and it looks like there might be some issues upstream with the MongoDB Docker image. docker-library/mongo#211. |
I managed to get auth working by changing the following env vars in the helm charts:
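For illustration only: the official mongo image documents the MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD variables for creating the root user, and wiring them to the chart's admin secret in the StatefulSet would look roughly like this (the secret name and key names here are assumptions, not the chart's actual template):
env:
  - name: MONGO_INITDB_ROOT_USERNAME
    valueFrom:
      secretKeyRef:
        name: my-release-mongodb-replicaset-admin   # assumed secret name
        key: user                                   # assumed key name
  - name: MONGO_INITDB_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: my-release-mongodb-replicaset-admin
        key: password                               # assumed key name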
|
I've been trying to figure this out for almost a day now. It creates the replica set perfectly fine with auth disabled. The moment I try to enable auth, it gets stuck in the Init state forever. I tried using the existingSecret* keys with pre-created secrets, and also letting Helm create its own secrets, both with the same result of the deployment never succeeding:
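For reference, letting Helm create its own secrets means supplying values along these lines (user, password, and key content are placeholders; the value names match the ones used in the install command from the original report):
auth:
  enabled: true
  adminUser: admin
  adminPassword: password
  key: somelongrandomkeyfilecontent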
Alternatively, when using an existing key, I'd use something like the following values together with the corresponding pre-created secrets, and it didn't work either:
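A sketch of what that combination would look like; the value names existingKeySecret and existingAdminSecret are the ones referenced in the chart's values.yaml (linked later in this thread), while the secret names and the key names inside the secrets (key.txt, user, password) are my assumptions:
auth:
  enabled: true
  existingKeySecret: mongo-keyfile
  existingAdminSecret: mongo-admin

kubectl create secret generic mongo-keyfile --from-literal=key.txt="$(openssl rand -base64 741)"
kubectl create secret generic mongo-admin --from-literal=user=admin --from-literal=password=changeme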
I'm wondering what you had to do to get it up and running in your case? Thank you |
Same as @virtuman here, the minute I enable auth (key, keyfile, admin user/pass, auth) the pod initialization hangs. |
Just noticed this issue. Sorry for being late to the party. I'll try and come up with a fix. |
/assign |
I believe I have a patch that works regarding the use of an existing secret. I implemented @ekryski recommendations. Mind if I issue a PR within the next day or so? |
Sure, PRs are welcome. |
FWIW, things work correctly if you follow case 2 in #2976 (comment). I don't know why case 1 doesn't work; it seems to be a Helm thing. I'd try and use separate --set flags for each value. I admit that the documentation could be better, but see the comment at https://github.com/kubernetes/charts/blob/master/stable/mongodb-replicaset/values.yaml#L89-L92 |
I added a fix into my existing PR: #3728. Feel free to test and review. |
Thanks @unguiculus, my problem seems to be related to the actual values of those secrets: things break if the values contain certain special characters.
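For anyone hitting the same thing: Helm's --set parser treats commas (and a few other characters) specially, so secret values containing them need escaping or are better passed via a values file; a quick sketch with placeholder values:
# A comma inside a --set value has to be escaped, otherwise Helm splits the value
helm install --set 'auth.adminPassword=pa\,ssword' stable/mongodb-replicaset
# Or sidestep the escaping entirely with a values file
helm install -f auth-values.yaml stable/mongodb-replicaset
|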
I had posted what I thought was a failure, but it was really because I had not deleted the PVCs before enabling auth and reinstalling with Helm.
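In other words, when toggling auth on an existing release, the old data volumes have to be removed as well, or the members come back up with their previous (auth-less) state. Roughly, assuming the claim template is named datadir and using placeholder release/label names:
helm delete --purge my-release
kubectl delete pvc -l release=my-release   # if the PVCs carry the release label
# or delete them by name, following the datadir-<pod-name> pattern
kubectl delete pvc datadir-my-release-mongodb-replicaset-0 datadir-my-release-mongodb-replicaset-1 datadir-my-release-mongodb-replicaset-2
|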
I am posting this for whoever faces a similar issue later. I had a similar issue due to a bad SSL key format. By looking at the pod status I could identify that the bootstrap container was still running, and by looking at that container's log there was nothing special.
Inside this container there is a script that logs to /work-dir/log.txt.
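If you want to check the same thing, something along these lines should surface that log while the init container is still running (pod name and namespace are placeholders; the container name bootstrap is taken from the description above):
kubectl logs <pod-name> -c bootstrap -n <namespace>
kubectl exec -it <pod-name> -c bootstrap -n <namespace> -- cat /work-dir/log.txt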
|
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions. |
This is still an issue, from what I see. When outputting the log files from all pods, it looks like the admin user never really gets created:
kubectl exec -it -npersistance mongo-mongodb-replicaset-0 cat /work-dir/log.txt | grep on-start
[2018-09-14T11:13:52,547058086+00:00] [on-start.sh] Bootstrapping MongoDB replica set member: mongo-mongodb-replicaset-0
[2018-09-14T11:13:52,548852694+00:00] [on-start.sh] Reading standard input...
[2018-09-14T11:13:52,552271422+00:00] [on-start.sh] Peers:
[2018-09-14T11:13:52,555081755+00:00] [on-start.sh] Starting a MongoDB instance...
[2018-09-14T11:13:52,561126593+00:00] [on-start.sh] Waiting for MongoDB to be ready...
[2018-09-14T11:13:52,718525633+00:00] [on-start.sh] Retrying...
[2018-09-14T11:13:55,112375192+00:00] [on-start.sh] Retrying...
[2018-09-14T11:13:57,219216969+00:00] [on-start.sh] Initialized.
[2018-09-14T11:13:57,403700763+00:00] [on-start.sh] Shutting down MongoDB (force: true)...
[2018-09-14T11:13:57,529441037+00:00] [on-start.sh] Good bye.
kubectl exec -it -npersistance mongo-mongodb-replicaset-1 cat /work-dir/log.txt | grep on-start
[2018-09-14T11:14:22,360964257+00:00] [on-start.sh] Bootstrapping MongoDB replica set member: mongo-mongodb-replicaset-1
[2018-09-14T11:14:22,365043898+00:00] [on-start.sh] Reading standard input...
[2018-09-14T11:14:22,366598137+00:00] [on-start.sh] Peers: mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.persistance.svc.cluster.local
[2018-09-14T11:14:22,367727513+00:00] [on-start.sh] Starting a MongoDB instance...
[2018-09-14T11:14:22,369081610+00:00] [on-start.sh] Waiting for MongoDB to be ready...
[2018-09-14T11:14:22,565054595+00:00] [on-start.sh] Retrying...
[2018-09-14T11:14:24,846129453+00:00] [on-start.sh] Retrying...
[2018-09-14T11:14:27,052446061+00:00] [on-start.sh] Initialized.
[2018-09-14T11:14:27,281634303+00:00] [on-start.sh] Shutting down MongoDB (force: true)...
[2018-09-14T11:14:27,469303484+00:00] [on-start.sh] Good bye.
kubectl exec -it -npersistance mongo-mongodb-replicaset-2 cat /work-dir/log.txt | grep on-start
[2018-09-14T11:15:00,715371444+00:00] [on-start.sh] Bootstrapping MongoDB replica set member: mongo-mongodb-replicaset-2
[2018-09-14T11:15:00,716656220+00:00] [on-start.sh] Reading standard input...
[2018-09-14T11:15:00,718388186+00:00] [on-start.sh] Peers: mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.persistance.svc.cluster.local mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.persistance.svc.cluster.local
[2018-09-14T11:15:00,719503601+00:00] [on-start.sh] Starting a MongoDB instance...
[2018-09-14T11:15:00,720716847+00:00] [on-start.sh] Waiting for MongoDB to be ready...
[2018-09-14T11:15:00,940319792+00:00] [on-start.sh] Retrying...
[2018-09-14T11:15:03,259400144+00:00] [on-start.sh] Retrying...
[2018-09-14T11:15:05,452164994+00:00] [on-start.sh] Retrying...
[2018-09-14T11:15:07,566998122+00:00] [on-start.sh] Initialized.
[2018-09-14T11:15:08,049282145+00:00] [on-start.sh] Shutting down MongoDB (force: true)...
[2018-09-14T11:15:08,180132011+00:00] [on-start.sh] Good bye.
The required line in https://github.com/helm/charts/blob/master/stable/mongodb-replicaset/init/on-start.sh#L162 never gets executed. Somehow all pods think they are in replica mode on startup. Are we sure |
Ah, sorry, this is not true. Tiller keeps the data-dir volumes; this is why the line did not get re-executed. Deleting the volumes helped. |
Holy fuck, thanks @scottcrespo! It wasn't documented anywhere but here. I was banging my head until I found this issue. |
This issue is being automatically closed due to inactivity. |
Hello guys & @unguiculus, I feel alone here, but for me it is really not working:
But I am still able to connect directly without authenticating. I SOLVED IT. |
Is this a request for help?:
yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Version of Helm and Kubernetes:
minikube version:
v0.24.1
kubectl version:
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T19:12:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
helm version:
Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
Which chart:
stable/mongodb-replicaset
EDIT: stable/mongodb-replicaset-2.1.3
What happened:
Authentication is not actually enabled in MongoDB when auth.enabled=true.
Clients on the mongo node and in other pods can connect to the replica set and read and write data without authenticating.
What you expected to happen:
Authentication is required to connect to the MongoDB replica set.
How to reproduce it (as minimally and precisely as possible):
Step 1: Install helm chart with authentication enabled
helm install --set auth.enabled=true,auth.adminUser=test,auth.adminPassword=test,auth.key=test stable/mongodb-replicaset
Step 2: Shell in primary node and read/write without authenticating
a) Initiate bash session on primary pod
kubectl exec -it <name-of-primary-pod> /bin/bash
b) Connect to mongo without auth
mongo
c) Write + Read Data
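For example, from the unauthenticated mongo shell both a write and a read succeed (the database and collection names here are arbitrary):
use test
db.foo.insert({ hello: "world" })
db.foo.find()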
Step 3: Confirm Auth is not enabled for MongoDB Replicaset
a) Find mongod start command and configuration file
ps aux | grep mongo
Output:
You can see the --auth flag is not set in the command.
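Another way to confirm this from the mongo shell is to ask mongod for its parsed options and check that no security section is present (getCmdLineOpts is a standard MongoDB admin command, nothing chart-specific):
mongo --eval 'printjson(db.adminCommand({ getCmdLineOpts: 1 }))'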
b) Read config file applied to mongod process
cat /config/mongod.conf
Output:
You can see that no authentication setting is present. Therefore the replica set will not require authentication from connecting clients.
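For comparison, a config with authentication enabled would contain something along these lines (the keyFile path is illustrative and depends on where the chart mounts the key):
security:
  authorization: enabled
  keyFile: /path/to/keyfile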
Either the command line flag --auth must be set or the security.authorization section must be specified in the config for authentication to be enabled (source).
Anything else we need to know:
I also created a bash session in a python3 container and used the pymongo client to read and write data to the replica set without authenticating.
Unless I've made some kind of mistake, this could be a serious issue if users believe that auth.enabled will result in the database being protected with mandatory authentication. It seems that in reality only an admin user is created, but clients can sidestep this by connecting without specifying a user. Therefore, if the service is exposed externally, or a node or pod in the Kubernetes cluster is compromised, the database is compromised as well.
I hope I've missed something, and I'm wrong in identifying this issue!