
[stable/mongodb] Arbiter error authentication failed when replicaset:true #15244

Closed
frbimo opened this issue Jul 4, 2019 · 5 comments

Comments

@frbimo

frbimo commented Jul 4, 2019

Describe the bug
"Error: Authentication failed" on arbiter after mongodb installation

Version of Helm and Kubernetes:
helm : v2.14.1
Kubernetes: 1.12.7
Which chart:
stable/mongodb

What happened:
I tried four times to install MongoDB with a replica set enabled, and every attempt failed.

arbiter log:

exception: connect failed

mongodb INFO  Cannot connect to MongoDB server. Retrying in 5 seconds...:
2019-07-04T06:05:13.674+0000 E QUERY    [js] Error: Authentication failed. :
connect@src/mongo/shell/mongo.js:343:13
@(connect):3:6


exception: connect failed

Error executing 'postInstallation': Cannot connect to MongoDB server. Aborting:
2019-07-04T06:05:13.674+0000 E QUERY    [js] Error: Authentication failed. :
connect@src/mongo/shell/mongo.js:343:13
@(connect):3:6

exception: connect failed
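
For reference, the root password the chart generated can be read back from the release secret (a sketch; it assumes the chart's default secret name <release>-mongodb and the key mongodb-root-password, matching the install command below):

kubectl get secret v1m1-mongodb --namespace mongo \
  -o jsonpath='{.data.mongodb-root-password}' | base64 --decode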

What you expected to happen:
MongoDB Replicaset installed and running properly.

How to reproduce it (as minimally and precisely as possible):
values.yaml:

## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
#   imageRegistry: myRegistryName
#   imagePullSecrets:
#     - myRegistryKeySecretName

image:
  ## Bitnami MongoDB registry
  ##
  registry: docker.io
  ## Bitnami MongoDB image name
  ##
  repository: bitnami/mongodb
  ## Bitnami MongoDB image tag
  ## ref: https://hub.docker.com/r/bitnami/mongodb/tags/
  ##
  tag: 4.0.9
  ## Specify a imagePullPolicy
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName

  ## Set to true if you would like to see extra information on logs
  ## It turns NAMI debugging in minideb
  ## ref:  https://github.com/bitnami/minideb-extras/#turn-on-nami-debugging
  debug: false

## Enable authentication
## ref: https://docs.mongodb.com/manual/tutorial/enable-authentication/
#
usePassword: true
# existingSecret: name-of-existing-secret

## MongoDB admin password
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#setting-the-root-password-on-first-run
##
#mongodbRootPassword:

## MongoDB custom user and database
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#creating-a-user-and-database-on-first-run
##
# mongodbUsername: username
# mongodbPassword: password
# mongodbDatabase: database

## Whether enable/disable IPv6 on MongoDB
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-ipv6
##
mongodbEnableIPv6: false

## Whether enable/disable DirectoryPerDB on MongoDB
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-directoryperdb
##
mongodbDirectoryPerDB: false

## MongoDB System Log configuration
## ref: https://github.com/bitnami/bitnami-docker-mongodb#configuring-system-log-verbosity-level
##
mongodbSystemLogVerbosity: 0
mongodbDisableSystemLog: false

## MongoDB additional command line flags
##
## Can be used to specify command line flags, for example:
##
## mongodbExtraFlags:
##  - "--wiredTigerCacheSizeGB=2"
mongodbExtraFlags: []

## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001

## Kubernetes Cluster Domain
clusterDomain: cluster.local

## Kubernetes service type
service:
  annotations: {}
  type: LoadBalancer
  # clusterIP: None
  port: 27017

  ## Specify the nodePort value for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ##
  # nodePort:

  ## Specify the externalIP value ClusterIP service type.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
  # externalIPs: []

  ## Specify the loadBalancerIP value for LoadBalancer service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
  ##
  # loadBalancerIP:

## Setting up replication
## ref: https://github.com/bitnami/bitnami-docker-mongodb#setting-up-a-replication
#
replicaSet:
  ## Whether to create a MongoDB replica set for high availability or not
  enabled: true
  useHostnames: true

  ## Name of the replica set
  ##
  name: rs0

  ## Key used for replica set authentication
  ##
  # key: key

  ## Number of replicas per each node type
  ##
  replicas:
    secondary: 2
    arbiter: 1

  ## Pod Disruption Budget
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
  pdb:
    enabled: true
    minAvailable:
      primary: 1
      secondary: 2
      arbiter: 1
    # maxUnavailable:
      # primary: 1
      # secondary: 1
      # arbiter: 1

# Annotations to be added to MongoDB pods
podAnnotations: {}

# Additional pod labels to apply
podLabels: {}

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    cpu: 1000m
    memory: 1024Mi
  requests:
    cpu: 1000m
    memory: 1024Mi

## Pod priority
## https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
# priorityClassName: ""

## Node selector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}

## Affinity
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}

## Tolerations
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []

## updateStrategy for MongoDB Primary, Secondary and Arbiter statefulsets
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
  type: RollingUpdate

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  enabled: true
  ## A manually managed Persistent Volume and Claim
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  ##
  # existingClaim:

  ## The path the volume will be mounted at, useful when using different
  ## MongoDB images.
  ##
  mountPath: /bitnami/mongodb

  ## The subdirectory of the volume to mount to, useful in dev environments
  ## and one PV for multiple services.
  ##
  subPath: ""

  ## mongodb data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass: "azuredisk-xfs"
  accessModes:
    - ReadWriteOnce
  size: 10Gi
  annotations: {}

# Expose mongodb via ingress. This is possible if using nginx-ingress
# https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
ingress:
  enabled: false
  annotations: {}
  labels: {}
  paths:
    - /
  hosts: []
  tls:
    - secretName: secret-tls
      hosts: []

## Configure the options for init containers to be run before the main app containers
## are started. All init containers are run sequentially and must exit without errors
## for the next one to be started.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
# extraInitContainers: |
#   - name: do-something
#     image: busybox
#     command: ['do', 'something']

## Configure extra options for liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
livenessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
readinessProbe:
  enabled: true
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1

# Define custom config map with init scripts
initConfigMap: {}
#  name: "init-config-map"

# Entries for the MongoDB config file
configmap:
#  # Where and how to store data.
#  storage:
#    dbPath: /opt/bitnami/mongodb/data/db
#    journal:
#      enabled: true
#    #engine:
#    #wiredTiger:
#  # where to write logging data.
#  systemLog:
#    destination: file
#    logAppend: true
#    path: /opt/bitnami/mongodb/logs/mongodb.log
#  # network interfaces
#  net:
#    port: 27017
#    bindIp: 0.0.0.0
#    unixDomainSocket:
#      enabled: true
#      pathPrefix: /opt/bitnami/mongodb/tmp
#  # replica set options
#  #replication:
#  #  replSetName: replicaset
#  # process management options
#  processManagement:
#     fork: false
#     pidFilePath: /opt/bitnami/mongodb/tmp/mongodb.pid
#  # set parameter options
#  setParameter:
#     enableLocalhostAuthBypass: true
#  # security options
#  security:
#    authorization: enabled
#    #keyFile: /opt/bitnami/mongodb/conf/keyfile

## Prometheus Exporter / Metrics
##
metrics:
  enabled: false

  image:
    registry: docker.io
    repository: forekshub/percona-mongodb-exporter
    tag: latest
    pullPolicy: Always
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    # pullSecrets:
    #   - myRegistryKeySecretName

  ## String with extra arguments to the metrics exporter
  ## ref: https://github.com/dcu/mongodb_exporter/blob/master/mongodb_exporter.go
  extraArgs: ""

  ## Metrics exporter resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  # resources: {}

  ## Metrics exporter liveness and readiness probes
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
  livenessProbe:
    enabled: false
    initialDelaySeconds: 15
    periodSeconds: 5
    timeoutSeconds: 5
    failureThreshold: 3
    successThreshold: 1
  readinessProbe:
    enabled: false
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 1
    failureThreshold: 3
    successThreshold: 1

  ## Metrics exporter pod Annotation
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9216"

  ## Prometheus Service Monitor
  ## ref: https://github.com/coreos/prometheus-operator
  ##      https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md
  serviceMonitor:
    ## If the operator is installed in your cluster, set to true to create a Service Monitor Entry
    enabled: false

    ## Specify a namespace if needed
    # namespace: monitoring

    ## Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
    additionalLabels: {}

    ## Specify Metric Relabellings to add to the scrape endpoint
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
    # relabellings:

    alerting:
      ## Define individual alerting rules as required
      ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#rulegroup
      ##      https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/
      rules: {}

      ## Used to pass Labels that are used by the Prometheus installed in your cluster to select Prometheus Rules to work with
      ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
      additionalLabels: {}

and then execute helm install -f values.yaml . -n v1m1 --namespace=mongo (with Helm v2, -n is shorthand for --name, the release name).

I have tried different service types (LoadBalancer and ClusterIP); the result was the same.

Anything else we need to know:
- Azure AKS cluster with 2 nodes (4 cores, 16 GB each)
- I read a related issue, but my case is different: the StorageClass uses azuredisk.

@juan131
Collaborator

juan131 commented Jul 4, 2019

Hi @frbimo

Is there any reason why you're using the tag 4.0.9? I was unable to reproduce the issue using the values.yaml below:

image:
  registry: docker.io
  repository: bitnami/mongodb
  tag: 4.0.10-debian-9-r39
  pullPolicy: IfNotPresent
  debug: false
usePassword: true
mongodbEnableIPv6: false
mongodbDirectoryPerDB: false
mongodbSystemLogVerbosity: 0
mongodbDisableSystemLog: false
mongodbExtraFlags: []
securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001
clusterDomain: cluster.local
service:
  annotations: {}
  type: ClusterIP
  port: 27017
replicaSet:
  enabled: true
  useHostnames: true
  name: rs0
  replicas:
    secondary: 2
    arbiter: 1
  pdb:
    enabled: true
    minAvailable:
      primary: 1
      secondary: 2
      arbiter: 1
podAnnotations: {}
podLabels: {}
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 256Mi
nodeSelector: {}
affinity: {}
tolerations: []
updateStrategy:
  type: RollingUpdate
persistence:
  enabled: true
  mountPath: /bitnami/mongodb
  subPath: ""
  storageClass: ""
  accessModes:
  - ReadWriteOnce
  size: 8Gi
  annotations: {}
ingress:
  enabled: false
  annotations: {}
  labels: {}
  paths:
  - /
  hosts: []
  tls:
  - secretName: secret-tls
    hosts: []
livenessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
readinessProbe:
  enabled: true
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
initConfigMap: {}
configmap: null
metrics:
  enabled: false
  image:
    registry: docker.io
    repository: forekshub/percona-mongodb-exporter
    tag: latest
    pullPolicy: Always
  extraArgs: ""
  livenessProbe:
    enabled: false
    initialDelaySeconds: 15
    periodSeconds: 5
    timeoutSeconds: 5
    failureThreshold: 3
    successThreshold: 1
  readinessProbe:
    enabled: false
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 1
    failureThreshold: 3
    successThreshold: 1
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9216"
  serviceMonitor:
    enabled: false
    additionalLabels: {}
    alerting:
      rules: {}
      additionalLabels: {}

Arbiter logs:

INFO  ==> ** Starting MongoDB setup **
INFO  ==> Validating settings in MONGODB_* env vars...
INFO  ==> Initializing MongoDB...
INFO  ==> Deploying MongoDB from scratch...
INFO  ==> No injected configuration files found. Creating default config files...
INFO  ==> Creating users...
INFO  ==> Users created
INFO  ==> Writing keyfile for replica set authentication: qFwqHjbFx7 /opt/bitnami/mongodb/conf/keyfile
INFO  ==> Configuring MongoDB replica set...
INFO  ==> Stopping MongoDB...
INFO  ==> Trying to connect to MongoDB server...
INFO  ==> Found MongoDB server listening at test-mongodb:27017 !
INFO  ==> MongoDB server listening and working at test-mongodb:27017 !
INFO  ==> Primary node ready.
INFO  ==> Adding node to the cluster
INFO  ==> Configuring MongoDB arbiter node
INFO  ==> Node test-mongodb-arbiter-0.test-mongodb-headless.test-mongodb.svc.cluster.local is confirmed!
INFO  ==> Stopping MongoDB...
INFO  ==>
INFO  ==> ########################################################################
INFO  ==>  Installation parameters for MongoDB:
INFO  ==>   Replication Mode: arbiter
INFO  ==>   Primary Host: test-mongodb
INFO  ==>   Primary Port: 27017
INFO  ==>   Primary Root User: root
INFO  ==>   Primary Root Password: **********
INFO  ==> (Passwords are not shown for security reasons)
INFO  ==> ########################################################################
INFO  ==>
INFO  ==> ** MongoDB setup finished! **

INFO  ==> ** Starting MongoDB **
...

Could you check whether the rest of the pods (primary and secondary) were able to initialise successfully?
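
For example, something along these lines (a sketch; the pod names assume the release name v1m1 and the mongo namespace from your report):

kubectl get pods --namespace mongo
kubectl logs v1m1-mongodb-primary-0 --namespace mongo
kubectl logs v1m1-mongodb-secondary-0 --namespace mongo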

@frbimo
Author

frbimo commented Jul 5, 2019

@juan131 I've tried 4.0.10xxx as well, but the result is the same.
I found something that MAY be the cause of this madness.
I get the error when I try to deploy under the exact same release name. When I delete the release with:
helm delete --purge v1m1
the PVCs bound to this release still exist.

beee@M0302:/charts/stable/mongodb-replicaset# helm delete --purge v1m1
release "v1m1" deleted
beee@M0302:/charts/stable/mongodb-replicaset# kc get pvc
NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
datadir-v1m1-mongodb-primary-0     Bound    pvc-f871ae95-9e29-11e9-a684-be8c744b552e   20Gi       RWO            azuredisk-xfs   23h
datadir-v1m1-mongodb-secondary-0   Bound    pvc-f8beb413-9e29-11e9-a684-be8c744b552e   20Gi       RWO            azuredisk-xfs   23h
datadir-v1m1-mongodb-secondary-1   Bound    pvc-f8d4aa70-9e29-11e9-a684-be8c744b552e   20Gi       RWO            azuredisk-xfs   23h

After manually deleting the bound PVCs and redeploying under the same release name, I managed to get a working deployment.
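
In case it helps others, the leftover claims can be deleted explicitly (names taken from the kc get pvc output above; be aware this permanently removes the data):

kubectl delete pvc datadir-v1m1-mongodb-primary-0 \
  datadir-v1m1-mongodb-secondary-0 \
  datadir-v1m1-mongodb-secondary-1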

Anyway, thank you for your response.

@frbimo frbimo closed this as completed Jul 5, 2019
@juan131
Collaborator

juan131 commented Jul 9, 2019

@frbimo since those PVCs are created from the volumeClaimTemplates defined in the StatefulSets, Helm does not have control over them. Therefore, I'm afraid you need to delete them manually even after removing the chart with helm del --purge.
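
For context, the PVCs come from a volumeClaimTemplates section roughly like the following (a simplified sketch, not the chart's exact template; size and storage class taken from the kc get pvc output above). Kubernetes creates one such claim per StatefulSet replica, outside of Helm's release tracking:

volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: azuredisk-xfs
      resources:
        requests:
          storage: 20Gi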

@milanof-huma

This was maddening for me, as the root password sticks to the datadir volume! So two points:

  • I think this should be part of the NOTES output: "Therefore, I'm afraid you need to manually delete them even if you remove the chart using helm del --purge."
  • Is there any way to keep the password out of the datadir volume?

@juan131
Collaborator

juan131 commented Oct 1, 2019

Hi @milanof-huma

Is there any way to keep the password out of the datadir volume?

I don't think so. The users and passwords are part of the MongoDB data; that's something we cannot change, since that's how MongoDB works.
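
That said, if you deliberately reuse the old PVCs, one workaround is to reinstall with the same credentials the data was originally initialized with, e.g. (a sketch; mongodbRootPassword and replicaSet.key are the values this chart exposes):

helm install -f values.yaml . -n v1m1 --namespace=mongo \
  --set mongodbRootPassword=<previous-root-password> \
  --set replicaSet.key=<previous-replica-set-key>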

I think this should be part of the NOTES output

I totally agree. We could also create a section in the README.md (like the one we have for MariaDB: https://github.com/helm/charts/tree/master/stable/mariadb#upgrading). Please feel free to create a PR and I'll be glad to review it.
