This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

[stable/mongodb-replicaset] Auth not enabled or required when auth.enabled=true #2976

Closed
scottcrespo opened this issue Dec 8, 2017 · 25 comments

@scottcrespo

scottcrespo commented Dec 8, 2017

Is this a request for help?:
yes

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Version of Helm and Kubernetes:
minikube version: v0.24.1

kubectl version: Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T19:12:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}

helm version: Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}

Which chart:
stable/mongodb-replicaset
EDIT: stable/mongodb-replicaset-2.1.3

What happened:
Authentication is not actually enabled in MongoDB when auth.enabled=true

Clients on the mongo node and in other pods can connect to the replicaset and read-write data without authenticating.

What you expected to happen:
Authentication is required to connect to the MongoDB replica set.

How to reproduce it (as minimally and precisely as possible):

Step 1: Install helm chart with authentication enabled

helm install --set auth.enabled=true,auth.adminUser=test,auth.adminPassword=test,auth.key=test stable/mongodb-replicaset

Step 2: Shell in primary node and read/write without authenticating

a) Initiate bash session on primary pod

kubectl exec -it <name-of-primary-pod> /bin/bash

b) Connect to mongo without auth

mongo

c) Write + Read Data

rs0:PRIMARY> use test
switched to db test
rs0:PRIMARY> db.testCollection.insertOne({"foo":"bar"})
{
	"acknowledged" : true,
	"insertedId" : ObjectId("5a2adf6a77d04d87d7c118a9")
}
rs0:PRIMARY> db.testCollection.findOne()
{ "_id" : ObjectId("5a2adf6a77d04d87d7c118a9"), "foo" : "bar" }
rs0:PRIMARY>

Step 3: Confirm Auth is not enabled for MongoDB Replicaset

a) Find mongod start command and configuration file

ps aux | grep mongo

Output:

root         1  1.0  3.0 1456484 62840 ?       Ssl  18:45   0:06 mongod --config=/config/mongod.conf

You can see the --auth flag is not set in the command.

b) Read config file applied to mongod process

cat /config/mongod.conf

Output:

net:
  port: 27017
replication:
  replSetName: rs0
storage:
  dbPath: /data/db

You can see that no authentication setting is present. Therefore the replicaset will not require authentication for connecting clients.

Either the --auth command-line flag must be set or a security.authorization section must be specified in the config for authentication to be enabled (source).
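For anyone verifying their own install, a quick additional check (a sketch; the pod name is a placeholder as above) is to ask the running mongod which options it actually parsed:

# Sketch: print the options mongod actually parsed. In the broken state shown
# above there is no security section at all; once authorization is enforced,
# this command is itself refused unless you authenticate first
# (e.g. add -u "$ADMIN_USER" -p "$ADMIN_PASSWORD" --authenticationDatabase admin).
kubectl exec -it <name-of-primary-pod> -- mongo --quiet --eval \
  'printjson(db.adminCommand({getCmdLineOpts: 1}))'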

Anything else we need to know:

I also opened a bash session in a python3 container and used the pymongo client to read and write data to the replica set without authenticating.

Unless I've made some kind of mistake, this could be a serious issue if users believe that auth.enabled will result in the database being protected with mandatory authentication. It seems that in reality only an admin user is created, but clients can sidestep this by connecting without specifying a user. Therefore, if the service is exposed externally, or a node or pod in the Kubernetes cluster is compromised, the database is compromised as well.

I hope I've missed something, and I'm wrong in identifying this issue!

@dimhoLt

dimhoLt commented Dec 9, 2017

An update on this: just setting auth: enabled: true has no effect on its own. You also have to manually restore the commented-out configmap settings at the bottom of the values file.

Edit: Just to clarify, these are the lines that need to be restored from being commented out:

 security:
   authorization: enabled
   keyFile: /keydir/key.txt

I think the configmap setting shouldn't have to be managed manually: if auth is enabled, the Mongo authorization configuration should be created automatically through the template to avoid issues like these.

Aside from that, I am also running into the problem where the two settings above properly enabled auth in the replicaset, but my admin user is never created and thus, I can't access the database to create my service users.

I'm pretty much the opposite of you: I have experience with Kubernetes and Helm, but not much with Mongo.

I noticed that the actual effect of providing the secret is that the environment variables ADMIN_USER and ADMIN_PASSWORD are set in the containers through the mongodb-statefulset template. Does this mean anything to you?

Edit2: If I run kubectl exec -ti into the pod and run $ mongo admin -u [my-user] -p [my-password], I get the error described here: exception: login failed. However, if I run $ mongo admin -u $ADMIN_USER -p $ADMIN_PASSWORD, I gain access. They should be exactly the same, so I'm a bit confused, but at least I can access the DB.
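One possible explanation for that difference, offered as an assumption rather than a confirmed cause: special characters in the literal credentials being interpreted by the interactive shell before mongo ever sees them. Quoting both forms the same way usually makes them behave identically:

# Assumption, not confirmed in this thread: unquoted special characters in
# literal values can be mangled by the shell before mongo ever sees them.
mongo admin -u 'my-user' -p 'my-p@ss{word}'        # single-quote literal values
mongo admin -u "$ADMIN_USER" -p "$ADMIN_PASSWORD"  # double-quote env vars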

@scottcrespo
Author

@dimhoLt

Thanks for the information!

However, I'm still getting inconsistent results with enabling auth. I took your advice and did the following:

Case 1: Install using --set (failed)

Action
helm install --set auth.enabled=true,auth.adminUser=test,auth.adminPassword=test,auth.key=testcontent,security.authorization=enabled,security.keyfile=/keydir/key.txt stable/mongodb-replicaset

Contrary to my initial report, I added the parameters security.authorization=enabled and security.keyfile=/keydir/key.txt

Result

  • PASS: Cluster started successfully.
  • PASS: Admin user was created
  • FAIL: Authentication is NOT required to read/write to DB

Comments
Perhaps I've made an error in setting the values? Can somebody check my inputs here?
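One plausible reason Case 1 fails while Case 2 (below) passes is that the --set keys above lack the configmap. prefix used in the values file, so Helm sets a top-level security value that the chart never reads. An untested sketch of the equivalent --set invocation with the prefix added:

# Untested sketch: the --set equivalent of the Case 2 values file below,
# with the configmap. prefix on the security settings.
helm install stable/mongodb-replicaset \
  --set auth.enabled=true \
  --set auth.adminUser=test \
  --set auth.adminPassword=test \
  --set auth.key=testcontent \
  --set configmap.security.authorization=enabled \
  --set configmap.security.keyFile=/keydir/key.txt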

Case 2: Install using file -f (passed)

Action
I updated the parameters in the values.yml to the following:

auth:
  enabled: true
  adminUser: test
  adminPassword: test
  key: keycontent
...
configmap:
  ...
  security:
    authorization: enabled
    keyFile: /keydir/key.txt

Installed with the following command

helm install -f values.yml stable/mongodb-replicaset

Result

  • PASS: Cluster created successfully
  • PASS: Admin user created
  • PASS: Authentication is required to read/write to DB

Discussion

I think the configmap setting shouldn't have to be managed manually: if auth is enabled, the Mongo authorization configuration should be created automatically through the template to avoid issues like these.

I agree 100%. Also, the documentation's "Authentication" section makes no mention of updating the ConfigMap, so the README definitely needs to be updated.

Aside from that, I am also running into the problem where the two settings above properly enabled auth in the replicaset, but my admin user is never created and thus, I can't access the database to create my service users.

Yes, I've encountered the same problem. I'm the one that opened #2965

At the end of the day, I'm still of the opinion that this is dangerous, because the cluster initialization is effectively failing silently: users may think authentication is enabled when it really isn't.

Change Requests

Should I open another ticket for this?

1: Update README with ConfigMap instructions

Update the README with proper instructions for current chart version 2.1.3

Perhaps even put a warning at the top alerting users to this issue, so they double-check that auth is properly enabled. Users might be unaware that they are affected.

2: Auto-Update ConfigMap when auth.enabled=true

This way an operator only has to enable user authentication in one place.
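A hypothetical sketch of what this could look like in the chart's ConfigMap template (not the chart's actual code; the keyFile path follows the examples above):

# Hypothetical template fragment, not the chart's actual source: emit the
# security block automatically whenever auth.enabled is true.
{{- if .Values.auth.enabled }}
security:
  authorization: enabled
  keyFile: /keydir/key.txt
{{- end }}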

3: Cluster Initialization should fail if authentication is not properly enabled and enforced

I don't think you can trust all users will be prudent enough (especially if they're not well-versed in mongo) to verify auth is properly enabled. Therefore, cluster initialization should fail if an automated authentication check fails on cluster creation.
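A minimal sketch of such a check, assuming it runs outside the chart and the pod name is passed in (this is not something the chart does today):

#!/usr/bin/env bash
# Sketch of an automated post-install auth check: an unauthenticated write
# must be rejected, otherwise fail loudly.
set -euo pipefail
POD="$1"   # e.g. myrelease-mongodb-replicaset-0

OUT=$(kubectl exec "$POD" -- mongo --quiet --eval \
  'db.getSiblingDB("test").authprobe.insertOne({probe: 1})' 2>&1 || true)

if echo "$OUT" | grep -q '"acknowledged" : true'; then
  echo "FAIL: unauthenticated write was accepted - auth is NOT enforced" >&2
  exit 1
fi
echo "OK: unauthenticated writes are rejected"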

@ekryski

ekryski commented Dec 16, 2017

Just wanted to toss my 👍 to all this. I ran into this issue as well and realized that DBs were not actually locked down as I thought they were... thankfully I'm testing everything locally.

@ekryski

ekryski commented Dec 17, 2017

Even with auth enabled I'm still not able to authenticate to the cluster. By enabling everything like @dimhoLt suggests, I am able to actually turn on authentication (because the --auth flag is being set). However, I'm not able to actually authenticate now.

However, if I run $ mongo admin -u $ADMIN_USER -p $ADMIN_PASSWORD, I gain access. They should be exactly the same, so I'm a bit confused, but at least I can access the DB.

I'm running into the same issue as well. I don't really have an idea of what's going on yet, but I think this repo might actually be using the wrong env vars for the admin username and password.

I've been digging a bit further and it looks like there might be some issues upstream with the MongoDB Docker image: docker-library/mongo#211.

@ekryski

ekryski commented Dec 18, 2017

I managed to get auth working by changing the following env vars in the helm charts:

  • ADMIN_USER -> MONGO_INITDB_ROOT_USERNAME
  • ADMIN_PASSWORD -> MONGO_INITDB_ROOT_PASSWORD
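For context, a hypothetical excerpt of what that change looks like in the StatefulSet's container spec (not the chart's verbatim template; the secret name is a placeholder, and the user/password keys match the admin secret used elsewhere in this thread):

# Hypothetical excerpt: the renamed env vars, sourced from the release's
# admin secret.
env:
  - name: MONGO_INITDB_ROOT_USERNAME
    valueFrom:
      secretKeyRef:
        name: <release-name>-mongodb-replicaset-admin   # placeholder
        key: user
  - name: MONGO_INITDB_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: <release-name>-mongodb-replicaset-admin   # placeholder
        key: password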

@virtuman

I've been trying to figure this out for almost a day now. It creates the replica set perfectly fine with auth disabled. The moment I try to enable auth, it gets stuck in the Init state forever.

I tried using the existingSecret* keys with pre-created secrets, and also let Helm create its own secrets, both with the same result: the deployment never succeeds:

auth:
  enabled: true
  adminUser: admin
  adminPassword: pass1
  key: some-key-name

...

# Entries for the MongoDB config file
configmap:
  storage:
    dbPath: /data/db
    directoryPerDB: true
  net:
    port: 27017
    # For Mongo 3.6 we also need to bind to outside world
    bindIpAll: true
    # Uncomment for TLS support
    # ssl:
    #   mode: requireSSL
    #   CAFile: /ca/tls.crt
    #   PEMKeyFile: /work-dir/mongo.pem
  replication:
    replSetName: rs0
  # Uncomment for TLS support or keyfile access control without TLS
  security:
    authorization: enabled
    keyFile: /keydir/key.txt
  setParameter:
    enableLocalhostAuthBypass: true

Alternatively, when using an existing key, I'd use something like this, and it didn't work either:

TMPFILE=$(mktemp)
/usr/bin/openssl rand -base64 741 > $TMPFILE
kubectl create secret generic mongodb-replicaset-prod-keyfile --from-file=key.txt=$TMPFILE

and

kubectl create secret generic mongodb-replicaset-prod-admin --from-literal=user=admin --from-literal=password=test1
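(For reference, wiring pre-created secrets like these into the release would look roughly like the sketch below; the value names are assumed from the existingSecret* keys mentioned above, so verify them against the chart's values.yaml for your version.)

# Sketch only: value names assumed, not confirmed for every chart version.
auth:
  enabled: true
  existingKeySecret: mongodb-replicaset-prod-keyfile
  existingAdminSecret: mongodb-replicaset-prod-admin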

I'm wondering what you had to do to get it up and running in your case?

Thank you

@cilindrox
Contributor

Same as @virtuman here: the minute I enable auth (key, keyfile, admin user/pass, auth), the pod initialization hangs.
It works as expected otherwise.

@unguiculus
Member

Just noticed this issue. Sorry for being late to the party. I'll try and come up with a fix.

@unguiculus
Member

/assign

@scottcrespo
Author

@unguiculus

I believe I have a patch that works regarding the use of an existing secret. I implemented @ekryski's recommendations. Mind if I issue a PR within the next day or so?

@unguiculus
Member

Sure, PRs are welcome.

@unguiculus
Member

FWIW, things work correctly if you follow case 2 in #2976 (comment). I don't know why case 1 doesn't work; it seems to be a Helm thing. I'd try using separate --set flags per value.

I admit that the documentation could be better, but the comment in values.yaml mentions what you have to uncomment.

https://github.com/kubernetes/charts/blob/master/stable/mongodb-replicaset/values.yaml#L89-L92

@unguiculus
Member

I added a fix to my existing PR: #3728. Feel free to test and review.

@cilindrox
Contributor

Thanks @unguiculus. My problem seems to be related to the actual value of those secrets: if you use characters like *{}), then the replica set init hangs. I'm assuming it's due to some variables not being escaped?
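(Until the escaping is sorted out, a simple workaround sketch is to generate credentials that avoid shell- and YAML-special characters entirely:)

# Workaround sketch: hex output contains only [0-9a-f], so nothing needs
# escaping in the shell, in YAML values, or in a mongo URI.
openssl rand -hex 24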

@scottcrespo
Author

For reference: following PR #3728, which is intended to fix this issue, I can't install the chart. I've opened another issue: #4706.

@montyz

montyz commented May 22, 2018

I had posted what I thought was a failure, but it was really because I had not deleted the PVCs before enabling auth and re-running Helm.

@agoloAbbas

agoloAbbas commented Aug 6, 2018

I am posting this for whoever faces a similar issue later:

I had a similar issue due to a bad SSL key format. By looking at

kubectl describe pod mongodb-mongodb-replicaset-0

I could identify that the bootstrap container was still running, but looking at this container's log showed nothing special:

kubectl logs mongodb-mongodb-replicaset-0 -c bootstrap

Inside this container there is a script that logs to /work-dir/log.txt. You can always view the contents of this file, and it will usually show you what the problem with your deployment is, I hope:

kubectl exec mongodb-mongodb-replicaset-0 -c bootstrap -- cat /work-dir/log.txt

@stale

stale bot commented Sep 5, 2018

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

@stale stale bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 5, 2018
@quorak

quorak commented Sep 14, 2018

This is still an issue, from what I see. When outputting the log files from all pods, it looks like the admin user never really gets created:

kubectl exec -it -npersistance mongo-mongodb-replicaset-0 cat /work-dir/log.txt | grep on-start  
[2018-09-14T11:13:52,547058086+00:00] [on-start.sh] Bootstrapping MongoDB replica set member: mongo-mongodb-replicaset-0
[2018-09-14T11:13:52,548852694+00:00] [on-start.sh] Reading standard input...
[2018-09-14T11:13:52,552271422+00:00] [on-start.sh] Peers: 
[2018-09-14T11:13:52,555081755+00:00] [on-start.sh] Starting a MongoDB instance...
[2018-09-14T11:13:52,561126593+00:00] [on-start.sh] Waiting for MongoDB to be ready...
[2018-09-14T11:13:52,718525633+00:00] [on-start.sh] Retrying...
[2018-09-14T11:13:55,112375192+00:00] [on-start.sh] Retrying...
[2018-09-14T11:13:57,219216969+00:00] [on-start.sh] Initialized.
[2018-09-14T11:13:57,403700763+00:00] [on-start.sh] Shutting down MongoDB (force: true)...
[2018-09-14T11:13:57,529441037+00:00] [on-start.sh] Good bye.
kubectl exec -it -npersistance mongo-mongodb-replicaset-1 cat /work-dir/log.txt | grep on-start
[2018-09-14T11:14:22,360964257+00:00] [on-start.sh] Bootstrapping MongoDB replica set member: mongo-mongodb-replicaset-1
[2018-09-14T11:14:22,365043898+00:00] [on-start.sh] Reading standard input...
[2018-09-14T11:14:22,366598137+00:00] [on-start.sh] Peers: mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.persistance.svc.cluster.local
[2018-09-14T11:14:22,367727513+00:00] [on-start.sh] Starting a MongoDB instance...
[2018-09-14T11:14:22,369081610+00:00] [on-start.sh] Waiting for MongoDB to be ready...
[2018-09-14T11:14:22,565054595+00:00] [on-start.sh] Retrying...
[2018-09-14T11:14:24,846129453+00:00] [on-start.sh] Retrying...
[2018-09-14T11:14:27,052446061+00:00] [on-start.sh] Initialized.
[2018-09-14T11:14:27,281634303+00:00] [on-start.sh] Shutting down MongoDB (force: true)...
[2018-09-14T11:14:27,469303484+00:00] [on-start.sh] Good bye.
kubectl exec -it -npersistance mongo-mongodb-replicaset-2 cat /work-dir/log.txt | grep on-start
[2018-09-14T11:15:00,715371444+00:00] [on-start.sh] Bootstrapping MongoDB replica set member: mongo-mongodb-replicaset-2
[2018-09-14T11:15:00,716656220+00:00] [on-start.sh] Reading standard input...
[2018-09-14T11:15:00,718388186+00:00] [on-start.sh] Peers: mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.persistance.svc.cluster.local mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.persistance.svc.cluster.local
[2018-09-14T11:15:00,719503601+00:00] [on-start.sh] Starting a MongoDB instance...
[2018-09-14T11:15:00,720716847+00:00] [on-start.sh] Waiting for MongoDB to be ready...
[2018-09-14T11:15:00,940319792+00:00] [on-start.sh] Retrying...
[2018-09-14T11:15:03,259400144+00:00] [on-start.sh] Retrying...
[2018-09-14T11:15:05,452164994+00:00] [on-start.sh] Retrying...
[2018-09-14T11:15:07,566998122+00:00] [on-start.sh] Initialized.
[2018-09-14T11:15:08,049282145+00:00] [on-start.sh] Shutting down MongoDB (force: true)...
[2018-09-14T11:15:08,180132011+00:00] [on-start.sh] Good bye.

The required line in https://github.com/helm/charts/blob/master/stable/mongodb-replicaset/init/on-start.sh#L162 never gets executed. Somehow all pods think they are already part of a replica set on startup.

Are we sure "no replset config has been received" is the right string to grep for?
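(A hedged way to check whether a member is skipping initialization because its volume already carries a replica set config; once authorization is enforced this may require admin credentials:)

# Sketch: a non-empty document here means the data dir already holds a
# replica set config, so on-start.sh treats the member as pre-initialized.
kubectl exec -it -npersistance mongo-mongodb-replicaset-0 -- \
  mongo --quiet --eval 'printjson(db.getSiblingDB("local").system.replset.findOne())'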

@stale stale bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 14, 2018
@quorak

quorak commented Sep 14, 2018

Ah, sorry, this is not true. Tiller keeps the data-dir volumes; that is why the line did not get re-executed.

kubectl delete pvc -npersistance datadir-medikura-mongo-mongodb-replicaset-2 

helped
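(For completeness, the same cleanup for all three members might look like the sketch below; the names are inferred from the logs above, and deleting PVCs destroys the data, so double-check before running it.)

# Destructive sketch: list the release's data PVCs, then delete them so the
# replica set bootstraps cleanly with auth enabled.
kubectl get pvc -npersistance
kubectl delete pvc -npersistance \
  datadir-medikura-mongo-mongodb-replicaset-0 \
  datadir-medikura-mongo-mongodb-replicaset-1 \
  datadir-medikura-mongo-mongodb-replicaset-2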

@shinebayar-g

Holy fuck, thanks @scottcrespo! It wasn't documented anywhere but here. I was banging my head until I found this issue.

@stale

stale bot commented Dec 7, 2018

This issue is being automatically closed due to inactivity.

@stale stale bot closed this as completed Dec 7, 2018
@bpiselli

bpiselli commented Dec 10, 2018

Hello guys & @unguiculus,

I feel alone here, but for me it is really not working, using

helm install -f values.yaml stable/mongodb-replicaset

with this values.yaml:

auth:
  enabled: true
  adminUser: user
  adminPassword: password
  metricsUser: metrics
  metricsPassword: password
  key: keycontent

configmap:
  security:
    authorization: enabled
    keyFile: /data/configdb/key.txt

But I am still able to connect directly with

mongo xx.xx.xx.xx:27017/admin

without any user/password, and of course the user and password do not work: banging my head on the wall.

I SOLVED IT
Thanks to #2976 (comment)
Do not forget to remove the persistent disks if you want the configuration to be reapplied.
