[stable/mongodb] Arbiter error authentication failed when replicaset:true #15244
Hi @frbimo Is there any reason why you're using the tag 4.0.9? I was unable to reproduce the issue using the values.yaml below:

```yaml
image:
  registry: docker.io
  repository: bitnami/mongodb
  tag: 4.0.10-debian-9-r39
  pullPolicy: IfNotPresent
  debug: false
usePassword: true
mongodbEnableIPv6: false
mongodbDirectoryPerDB: false
mongodbSystemLogVerbosity: 0
mongodbDisableSystemLog: false
mongodbExtraFlags: []
securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001
clusterDomain: cluster.local
service:
  annotations: {}
  type: ClusterIP
  port: 27017
replicaSet:
  enabled: true
  useHostnames: true
  name: rs0
  replicas:
    secondary: 2
    arbiter: 1
  pdb:
    enabled: true
    minAvailable:
      primary: 1
      secondary: 2
      arbiter: 1
podAnnotations: {}
podLabels: {}
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 256Mi
nodeSelector: {}
affinity: {}
tolerations: []
updateStrategy:
  type: RollingUpdate
persistence:
  enabled: true
  mountPath: /bitnami/mongodb
  subPath: ""
  storageClass: ""
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  annotations: {}
ingress:
  enabled: false
  annotations: {}
  labels: {}
  paths:
    - /
  hosts: []
  tls:
    - secretName: secret-tls
      hosts: []
livenessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
readinessProbe:
  enabled: true
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
initConfigMap: {}
configmap: null
metrics:
  enabled: false
  image:
    registry: docker.io
    repository: forekshub/percona-mongodb-exporter
    tag: latest
    pullPolicy: Always
  extraArgs: ""
  livenessProbe:
    enabled: false
    initialDelaySeconds: 15
    periodSeconds: 5
    timeoutSeconds: 5
    failureThreshold: 3
    successThreshold: 1
  readinessProbe:
    enabled: false
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 1
    failureThreshold: 3
    successThreshold: 1
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9216"
  serviceMonitor:
    enabled: false
    additionalLabels: {}
    alerting:
      rules: {}
      additionalLabels: {}
```

Arbiter logs:

```
INFO ==> ** Starting MongoDB setup **
INFO ==> Validating settings in MONGODB_* env vars...
INFO ==> Initializing MongoDB...
INFO ==> Deploying MongoDB from scratch...
INFO ==> No injected configuration files found. Creating default config files...
INFO ==> Creating users...
INFO ==> Users created
INFO ==> Writing keyfile for replica set authentication: qFwqHjbFx7 /opt/bitnami/mongodb/conf/keyfile
INFO ==> Configuring MongoDB replica set...
INFO ==> Stopping MongoDB...
INFO ==> Trying to connect to MongoDB server...
INFO ==> Found MongoDB server listening at test-mongodb:27017 !
INFO ==> MongoDB server listening and working at test-mongodb:27017 !
INFO ==> Primary node ready.
INFO ==> Adding node to the cluster
INFO ==> Configuring MongoDB arbiter node
INFO ==> Node test-mongodb-arbiter-0.test-mongodb-headless.test-mongodb.svc.cluster.local is confirmed!
INFO ==> Stopping MongoDB...
INFO ==>
INFO ==> ########################################################################
INFO ==> Installation parameters for MongoDB:
INFO ==> Replication Mode: arbiter
INFO ==> Primary Host: test-mongodb
INFO ==> Primary Port: 27017
INFO ==> Primary Root User: root
INFO ==> Primary Root Password: **********
INFO ==> (Passwords are not shown for security reasons)
INFO ==> ########################################################################
INFO ==>
INFO ==> ** MongoDB setup finished! **
INFO ==> ** Starting MongoDB **
...
```

Could you check whether the rest of the pods (primary and secondary) were able to initialise successfully?
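For anyone checking the same thing, a minimal sketch (the label selector and pod names below are assumptions based on the hostnames in the logs above; adjust the release name and namespace to your deployment):

```console
# List the pods created by the chart and confirm they are all Running/Ready
kubectl get pods -l app=mongodb

# Inspect the primary and secondary logs for authentication or replica set errors
kubectl logs test-mongodb-primary-0
kubectl logs test-mongodb-secondary-0
```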
@juan131 I've tried 4.0.10xxx but got the same result.
After manually deleting the bound PVCs and deploying again with the same release name, I managed to get a working deployment. Anyway, thank you for your response.
@frbimo since those PVCs are created as part of a StatefulSet, they are not removed when the release is deleted, so the old data (including the credentials) is reused by any new installation with the same release name.
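A sketch of the cleanup @frbimo describes (the label selector is an assumption; verify with `kubectl get pvc --show-labels` before deleting anything):

```console
# Find the PVCs left over from the previous release (labels are an assumption)
kubectl get pvc --namespace mongo -l app=mongodb,release=v1m1

# Delete them so the next install starts from an empty data directory
kubectl delete pvc --namespace mongo -l app=mongodb,release=v1m1
```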
This was madness to me, as the master root password will stick to the data dir volume! So two points:
Hi @milifili
I don't think so. The users/passwords are part of the MongoDB data; that's something we cannot change, since that's how MongoDB works.
I totally agree, we could also create a section in the README.md (like the one we have for MariaDB: https://github.com/helm/charts/tree/master/stable/mariadb#upgrading). Please feel free to create a PR and I'll be glad to review it.
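Until such a section lands, here is a minimal sketch of the upgrade pattern the MariaDB README describes, adapted to this chart (the secret name and key are assumptions; check `kubectl get secret --namespace mongo` for the actual names in your release):

```console
# Recover the root password generated at install time (secret/key names are assumptions)
export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace mongo v1m1-mongodb \
  -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)

# Pass it explicitly on upgrade so it keeps matching the data stored on the volume
helm upgrade v1m1 stable/mongodb -f values.yaml \
  --set mongodbRootPassword=$MONGODB_ROOT_PASSWORD
```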
Describe the bug
"Error: Authentication failed" on arbiter after mongodb installation
Version of Helm and Kubernetes:
Helm: v2.14.1
Kubernetes: 1.12.7
Which chart:
stable/mongodb
What happened:
I tried four times to install MongoDB with a replica set enabled, but it failed every time.
Arbiter log:
What you expected to happen:
MongoDB replica set installed and running properly.
How to reproduce it (as minimally and precisely as possible):
values.yaml:
and then execute `helm install -f values.yaml . -n v1m1 --namespace=mongo`
I have tried different service types (LoadBalancer and ClusterIP); the result was the same.
Anything else we need to know:
- Azure AKS, 2 clusters @ 4 cores / 16 GB
- I read this issue, but it is a different case; the StorageClass here uses azuredisk.