Setup of MongoDB ReplicaSet with Helm Chart #26176

Closed
Neniel opened this issue May 21, 2024 · 10 comments
Labels: mongodb, solved, stale (15 days without activity), tech-issues (The user has a technical issue about an application), triage (Triage is needed)

Comments


Neniel commented May 21, 2024

Name and Version

bitnami/mongodb:15.4.5

What architecture are you using?

amd64

What steps will reproduce the bug?

  1. Run the command below:
$ helm create test1 
  2. Add the dependency to the Chart.yaml file:
dependencies:
  - name: mongodb
    version: "15.4.5"
    repository: "https://charts.bitnami.com/bitnami"
  3. Add these values to values.yaml:
mongodb:
  architecture: replicaset
  replicaCount: 3
  externalAccess:
    enabled: true
    service:
      type: LoadBalancer
    autoDiscovery:
      enabled: true
  serviceAccount:
    create: true
  automountServiceAccountToken: true
  rbac:
    create: true
  auth:
    enabled: false
  initdbScripts:
    setup_replicaset_script.js: |
      rs.add("test-mongodb-0-external.default.svc.cluster.local:27017")
      rs.add("test-mongodb-1-external.default.svc.cluster.local:27017")
      rs.add("test-mongodb-2-external.default.svc.cluster.local:27017")
  4. Run the commands below to update, build, and install the chart:
$ helm dependency update test1 
$ helm dependency build test1
$ helm install test test1
  5. You should see 3 services when running:
$ kubectl get svc
NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
kubernetes                      ClusterIP      10.96.0.1        <none>        443/TCP           9m18s
test-mongodb-0-external         LoadBalancer   10.100.247.35    127.0.0.1     27017:30500/TCP   6m8s
test-mongodb-1-external         LoadBalancer   10.99.206.165    127.0.0.1     27017:32335/TCP   6m8s
test-mongodb-2-external         LoadBalancer   10.105.153.201   127.0.0.1     27017:31631/TCP   6m8s
test-mongodb-arbiter-headless   ClusterIP      None             <none>        27017/TCP         6m8s
test-mongodb-headless           ClusterIP      None             <none>        27017/TCP         6m8s
test-test2                      ClusterIP      10.97.241.91     <none>        80/TCP            6m8s
  6. You should see only 2 MongoDB pods, test-mongodb-0 and test-mongodb-1 (test-mongodb-2 is missing), when running:
$ kubectl get po
NAME                        READY   STATUS    RESTARTS        AGE
test-mongodb-0              1/1     Running   0               6m30s
test-mongodb-1              0/1     Running   0               6m19s
test-mongodb-arbiter-0      1/1     Running   1 (5m47s ago)   6m30s
test-test2-8d464978-bzzpm   1/1     Running   0               6m30s
  7. Get into the pod test-mongodb-0 with kubectl exec -i pod/test-mongodb-0 -- bash

  8. Run mongosh and then run the rs.status() command. You should see that only that pod is part of the RS (a quicker, non-interactive check is sketched right after these steps).

  9. Get into the pod test-mongodb-1 with kubectl exec -i pod/test-mongodb-1 -- bash

  10. Run mongosh and then run the rs.status() command. You should see an error that says: MongoServerError[NotYetInitialized]: no replset config has been received
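
For reference, the quicker non-interactive membership check could look like this (a sketch; adjust the pod and release names to your deployment; auth is disabled in this setup, so no credentials are passed):

$ kubectl exec test-mongodb-0 -- mongosh --quiet --eval 'rs.status().members.forEach(m => print(m.name + " " + m.stateStr))'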

What is the expected behavior?

A replica set of 3 members should be created.

What do you see instead?

A replica set of 3 members cannot be initialized.

Neniel added the tech-issues (The user has a technical issue about an application) label on May 21, 2024
github-actions bot added the triage (Triage is needed) label on May 21, 2024
carrodher transferred this issue from bitnami/containers on May 21, 2024
@carrodher
Member

Are you able to reproduce the issue by directly installing the Helm chart with those parameters?
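
For instance, something along these lines should be equivalent (a sketch, assuming the same values are saved to a mongodb-values.yaml file without the top-level mongodb: key, since here the chart is installed directly rather than as a subchart):

$ helm install test oci://registry-1.docker.io/bitnamicharts/mongodb --version 15.4.5 -f mongodb-values.yaml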

Author

Neniel commented May 21, 2024

I tried with this command:

helm install test oci://registry-1.docker.io/bitnamicharts/mongodb --set architecture=replicaset --set replicaCount=3 --set auth.enabled=false --set externalAccess.enable=true --set externalAccess.service.type=LoadBalancer --set externalAccess.autoDiscovery.enabled=true --set accountService.create=true --set automountServiceAccountToken=true --set rbac.create=true
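
Note that two of the flags above differ from the keys used in the values.yaml earlier in this thread: the command sets externalAccess.enable and accountService.create, while the keys used before are externalAccess.enabled and serviceAccount.create, so the mistyped flags are simply ignored by the templates. A corrected invocation would presumably be:

$ helm install test oci://registry-1.docker.io/bitnamicharts/mongodb --set architecture=replicaset --set replicaCount=3 --set auth.enabled=false --set externalAccess.enabled=true --set externalAccess.service.type=LoadBalancer --set externalAccess.autoDiscovery.enabled=true --set serviceAccount.create=true --set automountServiceAccountToken=true --set rbac.create=true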

And these are the pods and services created:

$ kubectl get svc
NAME                            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
kubernetes                      ClusterIP   10.96.0.1    <none>        443/TCP     29s
test-mongodb-arbiter-headless   ClusterIP   None         <none>        27017/TCP   9s
test-mongodb-headless           ClusterIP   None         <none>        27017/TCP   9s
$ kubectl get po 
NAME                     READY   STATUS    RESTARTS   AGE
test-mongodb-0           1/1     Running   0          17s
test-mongodb-1           0/1     Running   0          5s
test-mongodb-arbiter-0   1/1     Running   0          17s

@carrodher
Member

Can you check the logs of the container in the non-ready pod? Describing the pod can provide a hint as well
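
For example:

$ kubectl logs test-mongodb-1
$ kubectl logs test-mongodb-1 --previous
$ kubectl describe pod test-mongodb-1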

Author

Neniel commented May 22, 2024

test-mongodb-0

$ kubectl describe po test-mongodb-0
Name:             test-mongodb-0
Namespace:        default
Priority:         0
Service Account:  test-mongodb
Node:             minikube/192.168.49.2
Start Time:       Wed, 22 May 2024 06:25:22 -0300
Labels:           app.kubernetes.io/component=mongodb
                  app.kubernetes.io/instance=test
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=mongodb
                  app.kubernetes.io/version=7.0.9
                  apps.kubernetes.io/pod-index=0
                  controller-revision-hash=test-mongodb-58887cb587
                  helm.sh/chart=mongodb-15.5.1
                  statefulset.kubernetes.io/pod-name=test-mongodb-0
Annotations:      <none>
Status:           Running
IP:               10.244.0.17
IPs:
  IP:           10.244.0.17
Controlled By:  StatefulSet/test-mongodb
Init Containers:
  auto-discovery:
    Container ID:  docker://73207973b80211f7760554a772a77070681ec2e5778fc8251be2faf3c6b5188e
    Image:         docker.io/bitnami/kubectl:1.30.1-debian-12-r0
    Image ID:      docker-pullable://bitnami/kubectl@sha256:0aef4af32ece80e21c32ab31438252f32d84ebe35035faafedc4fde184075b4f
    Port:          <none>
    Host Port:     <none>
    Command:
      /scripts/auto-discovery.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 22 May 2024 06:27:55 -0300
      Finished:     Wed, 22 May 2024 06:27:59 -0300
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:                150m
      ephemeral-storage:  1Gi
      memory:             192Mi
    Requests:
      cpu:                100m
      ephemeral-storage:  50Mi
      memory:             128Mi
    Environment:
      MY_POD_NAME:  test-mongodb-0 (v1:metadata.name)
      SHARED_FILE:  /shared/info.txt
    Mounts:
      /scripts/auto-discovery.sh from scripts (rw,path="auto-discovery.sh")
      /shared from shared (rw)
      /tmp from empty-dir (rw,path="tmp-dir")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmbd5 (ro)
Containers:
  mongodb:
    Container ID:    docker://6c850191a4706785fc641ce39f2e8d8b81b87721d702a0c3576b9e0e641ebd5e
    Image:           docker.io/bitnami/mongodb:7.0.9-debian-12-r4
    Image ID:        docker-pullable://bitnami/mongodb@sha256:62fb92d0111f8dc0565b7daf8a57279fd09520020b79fd3bc550deb3ae5aee70
    Port:            27017/TCP
    Host Port:       0/TCP
    SeccompProfile:  RuntimeDefault
    Command:
      /scripts/setup.sh
    State:          Running
      Started:      Wed, 22 May 2024 06:28:00 -0300
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:                750m
      ephemeral-storage:  1Gi
      memory:             768Mi
    Requests:
      cpu:                500m
      ephemeral-storage:  50Mi
      memory:             512Mi
    Liveness:             exec [/bitnami/scripts/ping-mongodb.sh] delay=30s timeout=10s period=20s #success=1 #failure=6
    Readiness:            exec [/bitnami/scripts/readiness-probe.sh] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      BITNAMI_DEBUG:                    false
      SHARED_FILE:                      /shared/info.txt
      MY_POD_NAME:                      test-mongodb-0 (v1:metadata.name)
      MY_POD_NAMESPACE:                 default (v1:metadata.namespace)
      MY_POD_HOST_IP:                    (v1:status.hostIP)
      K8S_SERVICE_NAME:                 test-mongodb-headless
      MONGODB_INITIAL_PRIMARY_HOST:     test-mongodb-0.$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
      MONGODB_REPLICA_SET_NAME:         rs0
      ALLOW_EMPTY_PASSWORD:             yes
      MONGODB_SYSTEM_LOG_VERBOSITY:     0
      MONGODB_DISABLE_SYSTEM_LOG:       no
      MONGODB_DISABLE_JAVASCRIPT:       no
      MONGODB_ENABLE_JOURNAL:           yes
      MONGODB_PORT_NUMBER:              27017
      MONGODB_ENABLE_IPV6:              no
      MONGODB_ENABLE_DIRECTORY_PER_DB:  no
    Mounts:
      /.mongodb from empty-dir (rw,path="mongosh-home")
      /bitnami/mongodb from datadir (rw)
      /bitnami/scripts from common-scripts (rw)
      /opt/bitnami/mongodb/conf from empty-dir (rw,path="app-conf-dir")
      /opt/bitnami/mongodb/logs from empty-dir (rw,path="app-logs-dir")
      /opt/bitnami/mongodb/tmp from empty-dir (rw,path="app-tmp-dir")
      /scripts/setup.sh from scripts (rw,path="setup.sh")
      /shared from shared (rw)
      /tmp from empty-dir (rw,path="tmp-dir")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmbd5 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-test-mongodb-0
    ReadOnly:   false
  empty-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  common-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      test-mongodb-common-scripts
    Optional:  false
  shared:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      test-mongodb-scripts
    Optional:  false
  kube-api-access-gmbd5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  7m41s  default-scheduler  0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Normal   Scheduled         7m40s  default-scheduler  Successfully assigned default/test-mongodb-0 to minikube
  Normal   Pulling           7m40s  kubelet            Pulling image "docker.io/bitnami/kubectl:1.30.1-debian-12-r0"
  Normal   Pulled            5m7s   kubelet            Successfully pulled image "docker.io/bitnami/kubectl:1.30.1-debian-12-r0" in 31.01s (2m32.376s including waiting). Image size: 282761471 bytes.
  Normal   Created           5m7s   kubelet            Created container auto-discovery
  Normal   Started           5m7s   kubelet            Started container auto-discovery
  Normal   Pulled            5m2s   kubelet            Container image "docker.io/bitnami/mongodb:7.0.9-debian-12-r4" already present on machine
  Normal   Created           5m2s   kubelet            Created container mongodb
  Normal   Started           5m2s   kubelet            Started container mongodb

test-mongodb-1

$ kubectl describe po test-mongodb-1
Name:             test-mongodb-1
Namespace:        default
Priority:         0
Service Account:  test-mongodb
Node:             minikube/192.168.49.2
Start Time:       Wed, 22 May 2024 06:28:14 -0300
Labels:           app.kubernetes.io/component=mongodb
                  app.kubernetes.io/instance=test
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=mongodb
                  app.kubernetes.io/version=7.0.9
                  apps.kubernetes.io/pod-index=1
                  controller-revision-hash=test-mongodb-58887cb587
                  helm.sh/chart=mongodb-15.5.1
                  statefulset.kubernetes.io/pod-name=test-mongodb-1
Annotations:      <none>
Status:           Running
IP:               10.244.0.18
IPs:
  IP:           10.244.0.18
Controlled By:  StatefulSet/test-mongodb
Init Containers:
  auto-discovery:
    Container ID:  docker://8bc2b462592ef78a2de3e2af7ec2f43e604fcac7231e8cb97ee8c278e817cfc9
    Image:         docker.io/bitnami/kubectl:1.30.1-debian-12-r0
    Image ID:      docker-pullable://bitnami/kubectl@sha256:0aef4af32ece80e21c32ab31438252f32d84ebe35035faafedc4fde184075b4f
    Port:          <none>
    Host Port:     <none>
    Command:
      /scripts/auto-discovery.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 22 May 2024 06:28:15 -0300
      Finished:     Wed, 22 May 2024 06:28:18 -0300
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:                150m
      ephemeral-storage:  1Gi
      memory:             192Mi
    Requests:
      cpu:                100m
      ephemeral-storage:  50Mi
      memory:             128Mi
    Environment:
      MY_POD_NAME:  test-mongodb-1 (v1:metadata.name)
      SHARED_FILE:  /shared/info.txt
    Mounts:
      /scripts/auto-discovery.sh from scripts (rw,path="auto-discovery.sh")
      /shared from shared (rw)
      /tmp from empty-dir (rw,path="tmp-dir")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jfh54 (ro)
Containers:
  mongodb:
    Container ID:    docker://387f42322b559937eba8f42bb3f5c359da6f92b4c07d894aeee4589541b17920
    Image:           docker.io/bitnami/mongodb:7.0.9-debian-12-r4
    Image ID:        docker-pullable://bitnami/mongodb@sha256:62fb92d0111f8dc0565b7daf8a57279fd09520020b79fd3bc550deb3ae5aee70
    Port:            27017/TCP
    Host Port:       0/TCP
    SeccompProfile:  RuntimeDefault
    Command:
      /scripts/setup.sh
    State:          Running
      Started:      Wed, 22 May 2024 06:29:16 -0300
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 22 May 2024 06:28:19 -0300
      Finished:     Wed, 22 May 2024 06:29:16 -0300
    Ready:          False
    Restart Count:  1
    Limits:
      cpu:                750m
      ephemeral-storage:  1Gi
      memory:             768Mi
    Requests:
      cpu:                500m
      ephemeral-storage:  50Mi
      memory:             512Mi
    Liveness:             exec [/bitnami/scripts/ping-mongodb.sh] delay=30s timeout=10s period=20s #success=1 #failure=6
    Readiness:            exec [/bitnami/scripts/readiness-probe.sh] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      BITNAMI_DEBUG:                    false
      SHARED_FILE:                      /shared/info.txt
      MY_POD_NAME:                      test-mongodb-1 (v1:metadata.name)
      MY_POD_NAMESPACE:                 default (v1:metadata.namespace)
      MY_POD_HOST_IP:                    (v1:status.hostIP)
      K8S_SERVICE_NAME:                 test-mongodb-headless
      MONGODB_INITIAL_PRIMARY_HOST:     test-mongodb-0.$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
      MONGODB_REPLICA_SET_NAME:         rs0
      ALLOW_EMPTY_PASSWORD:             yes
      MONGODB_SYSTEM_LOG_VERBOSITY:     0
      MONGODB_DISABLE_SYSTEM_LOG:       no
      MONGODB_DISABLE_JAVASCRIPT:       no
      MONGODB_ENABLE_JOURNAL:           yes
      MONGODB_PORT_NUMBER:              27017
      MONGODB_ENABLE_IPV6:              no
      MONGODB_ENABLE_DIRECTORY_PER_DB:  no
    Mounts:
      /.mongodb from empty-dir (rw,path="mongosh-home")
      /bitnami/mongodb from datadir (rw)
      /bitnami/scripts from common-scripts (rw)
      /opt/bitnami/mongodb/conf from empty-dir (rw,path="app-conf-dir")
      /opt/bitnami/mongodb/logs from empty-dir (rw,path="app-logs-dir")
      /opt/bitnami/mongodb/tmp from empty-dir (rw,path="app-tmp-dir")
      /scripts/setup.sh from scripts (rw,path="setup.sh")
      /shared from shared (rw)
      /tmp from empty-dir (rw,path="tmp-dir")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jfh54 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-test-mongodb-1
    ReadOnly:   false
  empty-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  common-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      test-mongodb-common-scripts
    Optional:  false
  shared:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      test-mongodb-scripts
    Optional:  false
  kube-api-access-jfh54:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  5m43s                  default-scheduler  0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Normal   Scheduled         5m42s                  default-scheduler  Successfully assigned default/test-mongodb-1 to minikube
  Normal   Pulled            5m41s                  kubelet            Container image "docker.io/bitnami/kubectl:1.30.1-debian-12-r0" already present on machine
  Normal   Created           5m41s                  kubelet            Created container auto-discovery
  Normal   Started           5m41s                  kubelet            Started container auto-discovery
  Normal   Pulled            4m40s (x2 over 5m37s)  kubelet            Container image "docker.io/bitnami/mongodb:7.0.9-debian-12-r4" already present on machine
  Normal   Created           4m40s (x2 over 5m37s)  kubelet            Created container mongodb
  Normal   Started           4m40s (x2 over 5m37s)  kubelet            Started container mongodb
  Warning  Unhealthy         4m40s                  kubelet            Liveness probe failed:
  Warning  Unhealthy         4m40s                  kubelet            Readiness probe failed:
  Warning  Unhealthy         40s (x32 over 5m29s)   kubelet            Readiness probe failed: Error: Not ready

test-mongodb-2

The pod is never created, so I cannot provide logs for that one.
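
A possible explanation (a sketch, not verified against this chart's templates): if the StatefulSet uses the default OrderedReady pod management policy, test-mongodb-2 will not be created until test-mongodb-1 becomes Ready, so the missing pod would be a consequence of the readiness failures above. The controller's view can be checked with:

$ kubectl get statefulset test-mongodb
$ kubectl describe statefulset test-mongodb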

Author

Neniel commented May 22, 2024

Now it looks like the replica set is being initialized (check the commands below). However, I don't understand why this behaves differently from configuring the chart as a dependency. Might I be forgetting to configure something else in values.yaml?

I tried once again with the command below:

$ helm install test oci://registry-1.docker.io/bitnamicharts/mongodb --set architecture=replicaset --set replicaCount=3 --set auth.enabled=false --set externalAccess.enable=true --set externalAccess.service.type=LoadBalancer --set externalAccess.autoDiscovery.enabled=true --set accountService.create=true --set automountServiceAccountToken=true --set rbac.create=true --version=15.4.5

And I examined the pods as requested:

$ kubectl get po
NAME                     READY   STATUS    RESTARTS       AGE
test-mongodb-0           1/1     Running   0              2m27s
test-mongodb-1           1/1     Running   0              2m6s
test-mongodb-2           1/1     Running   0              102s
test-mongodb-arbiter-0   1/1     Running   1 (104s ago)   2m27s

test-mongodb-0

$ kubectl logs test-mongodb-0
mongodb 23:22:44.38 INFO  ==> Advertised Hostname: test-mongodb-0.test-mongodb-headless.default.svc.cluster.local
mongodb 23:22:44.38 INFO  ==> Advertised Port: 27017
realpath: /bitnami/mongodb/data/db: No such file or directory
mongodb 23:22:44.39 INFO  ==> Data dir empty, checking if the replica set already exists
MongoNetworkError: getaddrinfo ENOTFOUND test-mongodb-1.test-mongodb-headless.default.svc.cluster.local
mongodb 23:22:44.74 INFO  ==> Pod name matches initial primary pod name, configuring node as a primary
mongodb 23:22:44.76 INFO  ==>
mongodb 23:22:44.76 INFO  ==> Welcome to the Bitnami mongodb container
mongodb 23:22:44.76 INFO  ==> Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 23:22:44.76 INFO  ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 23:22:44.76 INFO  ==> Upgrade to Tanzu Application Catalog for production environments to access custom-configured and pre-packaged software components. Gain enhanced features, including Software Bill of Materials (SBOM), CVE scan result reports, and VEX documents. To learn more, visit https://bitnami.com/enterprise
mongodb 23:22:44.76 INFO  ==>
mongodb 23:22:44.77 INFO  ==> ** Starting MongoDB setup **
mongodb 23:22:44.79 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 23:22:44.84 WARN  ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
mongodb 23:22:44.85 INFO  ==> Initializing MongoDB...
mongodb 23:22:44.91 INFO  ==> Deploying MongoDB from scratch...
MongoNetworkError: connect ECONNREFUSED 10.244.0.27:27017
mongodb 23:22:45.99 INFO  ==> Creating users...
mongodb 23:22:46.00 INFO  ==> Users created
mongodb 23:22:46.06 INFO  ==> Configuring MongoDB replica set...
mongodb 23:22:46.07 INFO  ==> Stopping MongoDB...
mongodb 23:22:51.16 INFO  ==> Configuring MongoDB primary node
mongodb 23:22:52.65 INFO  ==> Stopping MongoDB...
mongodb 23:22:53.71 INFO  ==> ** MongoDB setup finished! **

mongodb 23:22:53.73 INFO  ==> ** Starting MongoDB **
{"t":{"$date":"2024-05-22T23:22:53.766Z"},"s":"I",  "c":"CONTROL",  "id":5760901, "ctx":"main","msg":"Applied --setParameter options","attr":{"serverParameters":{"enableLocalhostAuthBypass":{"default":true,"value":true}}}}
$ kubectl describe po test-mongodb-0
Name:             test-mongodb-0
Namespace:        default
Priority:         0
Service Account:  test-mongodb
Node:             minikube/192.168.49.2
Start Time:       Wed, 22 May 2024 20:22:43 -0300
Labels:           app.kubernetes.io/component=mongodb
                  app.kubernetes.io/instance=test
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=mongodb
                  app.kubernetes.io/version=7.0.9
                  apps.kubernetes.io/pod-index=0
                  controller-revision-hash=test-mongodb-949b889c7
                  helm.sh/chart=mongodb-15.4.5
                  statefulset.kubernetes.io/pod-name=test-mongodb-0
Annotations:      <none>
Status:           Running
IP:               10.244.0.27
IPs:
  IP:           10.244.0.27
Controlled By:  StatefulSet/test-mongodb
Containers:
  mongodb:
    Container ID:    docker://cc6804c848afdece8a726ebae58a3a346d91ceee1dadbc52e01775cff0a453ca
    Image:           docker.io/bitnami/mongodb:7.0.9-debian-12-r4
    Image ID:        docker-pullable://bitnami/mongodb@sha256:62fb92d0111f8dc0565b7daf8a57279fd09520020b79fd3bc550deb3ae5aee70
    Port:            27017/TCP
    Host Port:       0/TCP
    SeccompProfile:  RuntimeDefault
    Command:
      /scripts/setup.sh
    State:          Running
      Started:      Wed, 22 May 2024 20:22:44 -0300
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:                750m
      ephemeral-storage:  1Gi
      memory:             768Mi
    Requests:
      cpu:                500m
      ephemeral-storage:  50Mi
      memory:             512Mi
    Liveness:             exec [/bitnami/scripts/ping-mongodb.sh] delay=30s timeout=10s period=20s #success=1 #failure=6
    Readiness:            exec [/bitnami/scripts/readiness-probe.sh] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      BITNAMI_DEBUG:                    false
      MY_POD_NAME:                      test-mongodb-0 (v1:metadata.name)
      MY_POD_NAMESPACE:                 default (v1:metadata.namespace)
      MY_POD_HOST_IP:                    (v1:status.hostIP)
      K8S_SERVICE_NAME:                 test-mongodb-headless
      MONGODB_INITIAL_PRIMARY_HOST:     test-mongodb-0.$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
      MONGODB_REPLICA_SET_NAME:         rs0
      MONGODB_ADVERTISED_HOSTNAME:      $(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
      ALLOW_EMPTY_PASSWORD:             yes
      MONGODB_SYSTEM_LOG_VERBOSITY:     0
      MONGODB_DISABLE_SYSTEM_LOG:       no
      MONGODB_DISABLE_JAVASCRIPT:       no
      MONGODB_ENABLE_JOURNAL:           yes
      MONGODB_PORT_NUMBER:              27017
      MONGODB_ENABLE_IPV6:              no
      MONGODB_ENABLE_DIRECTORY_PER_DB:  no
    Mounts:
      /.mongodb from empty-dir (rw,path="mongosh-home")
      /bitnami/mongodb from datadir (rw)
      /bitnami/scripts from common-scripts (rw)
      /opt/bitnami/mongodb/conf from empty-dir (rw,path="app-conf-dir")
      /opt/bitnami/mongodb/logs from empty-dir (rw,path="app-logs-dir")
      /opt/bitnami/mongodb/tmp from empty-dir (rw,path="app-tmp-dir")
      /scripts/setup.sh from scripts (rw,path="setup.sh")
      /tmp from empty-dir (rw,path="tmp-dir")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zccpc (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-test-mongodb-0
    ReadOnly:   false
  empty-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  common-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      test-mongodb-common-scripts
    Optional:  false
  scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      test-mongodb-scripts
    Optional:  false
  kube-api-access-zccpc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age    From               Message
  ----     ------     ----   ----               -------
  Normal   Scheduled  4m55s  default-scheduler  Successfully assigned default/test-mongodb-0 to minikube
  Normal   Pulled     4m54s  kubelet            Container image "docker.io/bitnami/mongodb:7.0.9-debian-12-r4" already present on machine
  Normal   Created    4m54s  kubelet            Created container mongodb
  Normal   Started    4m54s  kubelet            Started container mongodb
  Warning  Unhealthy  4m44s  kubelet            Readiness probe failed: MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017

test-mongodb-1

$ kubectl logs test-mongodb-1
mongodb 23:23:07.25 INFO  ==> Advertised Hostname: test-mongodb-1.test-mongodb-headless.default.svc.cluster.local
mongodb 23:23:07.25 INFO  ==> Advertised Port: 27017
realpath: /bitnami/mongodb/data/db: No such file or directory
mongodb 23:23:07.26 INFO  ==> Data dir empty, checking if the replica set already exists
mongodb 23:23:07.79 INFO  ==> Detected existing primary: test-mongodb-0.test-mongodb-headless.default.svc.cluster.local:27017
mongodb 23:23:07.79 INFO  ==> Current primary is different from this node. Configuring the node as replica of test-mongodb-0.test-mongodb-headless.default.svc.cluster.local:27017
mongodb 23:23:07.81 INFO  ==>
mongodb 23:23:07.81 INFO  ==> Welcome to the Bitnami mongodb container
mongodb 23:23:07.84 INFO  ==> Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 23:23:07.85 INFO  ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 23:23:07.85 INFO  ==> Upgrade to Tanzu Application Catalog for production environments to access custom-configured and pre-packaged software components. Gain enhanced features, including Software Bill of Materials (SBOM), CVE scan result reports, and VEX documents. To learn more, visit https://bitnami.com/enterprise
mongodb 23:23:07.85 INFO  ==>
mongodb 23:23:07.86 INFO  ==> ** Starting MongoDB setup **
mongodb 23:23:07.88 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 23:23:07.91 WARN  ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
mongodb 23:23:07.96 INFO  ==> Initializing MongoDB...
mongodb 23:23:08.06 INFO  ==> Deploying MongoDB from scratch...
MongoNetworkError: connect ECONNREFUSED 10.244.0.29:27017
mongodb 23:23:09.31 INFO  ==> Creating users...
mongodb 23:23:09.31 INFO  ==> Users created
mongodb 23:23:09.37 INFO  ==> Configuring MongoDB replica set...
mongodb 23:23:09.37 INFO  ==> Stopping MongoDB...
mongodb 23:23:13.35 INFO  ==> Trying to connect to MongoDB server test-mongodb-0.test-mongodb-headless.default.svc.cluster.local...
mongodb 23:23:13.36 INFO  ==> Found MongoDB server listening at test-mongodb-0.test-mongodb-headless.default.svc.cluster.local:27017 !
mongodb 23:23:14.76 INFO  ==> MongoDB server listening and working at test-mongodb-0.test-mongodb-headless.default.svc.cluster.local:27017 !
mongodb 23:23:16.15 INFO  ==> Primary node ready.
mongodb 23:23:17.94 INFO  ==> Adding node to the cluster
mongodb 23:23:22.04 INFO  ==> Node test-mongodb-1.test-mongodb-headless.default.svc.cluster.local:27017 is confirmed!
Current Mongosh Log ID:	664e7e6e1371e675132202d7
Connecting to:		mongodb://test-mongodb-1.test-mongodb-headless.default.svc.cluster.local:27017/admin?directConnection=true&appName=mongosh+2.2.5
Using MongoDB:		7.0.9
Using Mongosh:		2.2.5

For mongosh info see: https://docs.mongodb.com/mongodb-shell/

------
   The server generated these startup warnings when booting
   2024-05-22T23:23:10.443+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
   2024-05-22T23:23:11.434+00:00: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never' in this binary version
   2024-05-22T23:23:11.434+00:00: vm.max_map_count is too low
------

rs0 [direct: secondary] admin> DeprecationWarning: .setSecondaryOk() is deprecated. Use .setReadPref("primaryPreferred") instead
Setting read preference from "primary" to "primaryPreferred"

mongodb 23:23:27.40 INFO  ==> Waiting until initial data sync is complete...
mongodb 23:23:29.11 INFO  ==> initial data sync completed
mongodb 23:23:29.14 INFO  ==> Stopping MongoDB...
mongodb 23:23:49.19 INFO  ==> ** MongoDB setup finished! **
rs0 [direct: secondary] admin> 
mongodb 23:23:49.21 INFO  ==> ** Starting MongoDB **
{"t":{"$date":"2024-05-22T23:23:49.247Z"},"s":"I",  "c":"CONTROL",  "id":5760901, "ctx":"main","msg":"Applied --setParameter options","attr":{"serverParameters":{"enableLocalhostAuthBypass":{"default":true,"value":true}}}}
$ kubectl describe po test-mongodb-1
Name:             test-mongodb-1
Namespace:        default
Priority:         0
Service Account:  test-mongodb
Node:             minikube/192.168.49.2
Start Time:       Wed, 22 May 2024 20:23:06 -0300
Labels:           app.kubernetes.io/component=mongodb
                  app.kubernetes.io/instance=test
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=mongodb
                  app.kubernetes.io/version=7.0.9
                  apps.kubernetes.io/pod-index=1
                  controller-revision-hash=test-mongodb-949b889c7
                  helm.sh/chart=mongodb-15.4.5
                  statefulset.kubernetes.io/pod-name=test-mongodb-1
Annotations:      <none>
Status:           Running
IP:               10.244.0.29
IPs:
  IP:           10.244.0.29
Controlled By:  StatefulSet/test-mongodb
Containers:
  mongodb:
    Container ID:    docker://c5be28f396f44bb2516da50aad95cc2d4932227c2711fb1615d4395af4a2db48
    Image:           docker.io/bitnami/mongodb:7.0.9-debian-12-r4
    Image ID:        docker-pullable://bitnami/mongodb@sha256:62fb92d0111f8dc0565b7daf8a57279fd09520020b79fd3bc550deb3ae5aee70
    Port:            27017/TCP
    Host Port:       0/TCP
    SeccompProfile:  RuntimeDefault
    Command:
      /scripts/setup.sh
    State:          Running
      Started:      Wed, 22 May 2024 20:23:07 -0300
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:                750m
      ephemeral-storage:  1Gi
      memory:             768Mi
    Requests:
      cpu:                500m
      ephemeral-storage:  50Mi
      memory:             512Mi
    Liveness:             exec [/bitnami/scripts/ping-mongodb.sh] delay=30s timeout=10s period=20s #success=1 #failure=6
    Readiness:            exec [/bitnami/scripts/readiness-probe.sh] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      BITNAMI_DEBUG:                    false
      MY_POD_NAME:                      test-mongodb-1 (v1:metadata.name)
      MY_POD_NAMESPACE:                 default (v1:metadata.namespace)
      MY_POD_HOST_IP:                    (v1:status.hostIP)
      K8S_SERVICE_NAME:                 test-mongodb-headless
      MONGODB_INITIAL_PRIMARY_HOST:     test-mongodb-0.$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
      MONGODB_REPLICA_SET_NAME:         rs0
      MONGODB_ADVERTISED_HOSTNAME:      $(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
      ALLOW_EMPTY_PASSWORD:             yes
      MONGODB_SYSTEM_LOG_VERBOSITY:     0
      MONGODB_DISABLE_SYSTEM_LOG:       no
      MONGODB_DISABLE_JAVASCRIPT:       no
      MONGODB_ENABLE_JOURNAL:           yes
      MONGODB_PORT_NUMBER:              27017
      MONGODB_ENABLE_IPV6:              no
      MONGODB_ENABLE_DIRECTORY_PER_DB:  no
    Mounts:
      /.mongodb from empty-dir (rw,path="mongosh-home")
      /bitnami/mongodb from datadir (rw)
      /bitnami/scripts from common-scripts (rw)
      /opt/bitnami/mongodb/conf from empty-dir (rw,path="app-conf-dir")
      /opt/bitnami/mongodb/logs from empty-dir (rw,path="app-logs-dir")
      /opt/bitnami/mongodb/tmp from empty-dir (rw,path="app-tmp-dir")
      /scripts/setup.sh from scripts (rw,path="setup.sh")
      /tmp from empty-dir (rw,path="tmp-dir")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8q69c (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-test-mongodb-1
    ReadOnly:   false
  empty-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  common-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      test-mongodb-common-scripts
    Optional:  false
  scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      test-mongodb-scripts
    Optional:  false
  kube-api-access-8q69c:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  7m55s  default-scheduler  0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Normal   Scheduled         7m53s  default-scheduler  Successfully assigned default/test-mongodb-1 to minikube
  Normal   Pulled            7m52s  kubelet            Container image "docker.io/bitnami/mongodb:7.0.9-debian-12-r4" already present on machine
  Normal   Created           7m52s  kubelet            Created container mongodb
  Normal   Started           7m52s  kubelet            Started container mongodb
  Warning  Unhealthy         7m41s  kubelet            Readiness probe failed: Error: Not ready
  Warning  Unhealthy         7m20s  kubelet            Readiness probe failed: MongoServerSelectionError: The server is in quiesce mode and will shut down
  Warning  Unhealthy         7m12s  kubelet            Liveness probe failed: MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017
  Warning  Unhealthy         7m12s  kubelet            Readiness probe failed: MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017

test-mongodb-2

$ kubectl logs test-mongodb-2
mongodb 23:23:30.29 INFO  ==> Advertised Hostname: test-mongodb-2.test-mongodb-headless.default.svc.cluster.local
mongodb 23:23:30.29 INFO  ==> Advertised Port: 27017
realpath: /bitnami/mongodb/data/db: No such file or directory
mongodb 23:23:30.29 INFO  ==> Data dir empty, checking if the replica set already exists
mongodb 23:23:30.76 INFO  ==> Detected existing primary: test-mongodb-0.test-mongodb-headless.default.svc.cluster.local:27017
mongodb 23:23:30.76 INFO  ==> Current primary is different from this node. Configuring the node as replica of test-mongodb-0.test-mongodb-headless.default.svc.cluster.local:27017
mongodb 23:23:30.77 INFO  ==>
mongodb 23:23:30.77 INFO  ==> Welcome to the Bitnami mongodb container
mongodb 23:23:30.78 INFO  ==> Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 23:23:30.78 INFO  ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 23:23:30.78 INFO  ==> Upgrade to Tanzu Application Catalog for production environments to access custom-configured and pre-packaged software components. Gain enhanced features, including Software Bill of Materials (SBOM), CVE scan result reports, and VEX documents. To learn more, visit https://bitnami.com/enterprise
mongodb 23:23:30.78 INFO  ==>
mongodb 23:23:30.78 INFO  ==> ** Starting MongoDB setup **
mongodb 23:23:30.83 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 23:23:30.87 WARN  ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
mongodb 23:23:30.88 INFO  ==> Initializing MongoDB...
mongodb 23:23:30.97 INFO  ==> Deploying MongoDB from scratch...
MongoNetworkError: connect ECONNREFUSED 10.244.0.30:27017
mongodb 23:23:32.05 INFO  ==> Creating users...
mongodb 23:23:32.06 INFO  ==> Users created
mongodb 23:23:32.13 INFO  ==> Configuring MongoDB replica set...
mongodb 23:23:32.14 INFO  ==> Stopping MongoDB...
mongodb 23:23:35.54 INFO  ==> Trying to connect to MongoDB server test-mongodb-0.test-mongodb-headless.default.svc.cluster.local...
mongodb 23:23:35.55 INFO  ==> Found MongoDB server listening at test-mongodb-0.test-mongodb-headless.default.svc.cluster.local:27017 !
mongodb 23:23:36.63 INFO  ==> MongoDB server listening and working at test-mongodb-0.test-mongodb-headless.default.svc.cluster.local:27017 !
mongodb 23:23:37.84 INFO  ==> Primary node ready.
mongodb 23:23:39.23 INFO  ==> Adding node to the cluster
mongodb 23:23:43.45 INFO  ==> Node test-mongodb-2.test-mongodb-headless.default.svc.cluster.local:27017 is confirmed!
Current Mongosh Log ID:	664e7e8392c7e685d12202d7
Connecting to:		mongodb://test-mongodb-2.test-mongodb-headless.default.svc.cluster.local:27017/admin?directConnection=true&appName=mongosh+2.2.5
Using MongoDB:		7.0.9
Using Mongosh:		2.2.5

For mongosh info see: https://docs.mongodb.com/mongodb-shell/

------
   The server generated these startup warnings when booting
   2024-05-22T23:23:33.206+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
   2024-05-22T23:23:34.368+00:00: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never' in this binary version
   2024-05-22T23:23:34.368+00:00: vm.max_map_count is too low
------

rs0 [direct: secondary] admin> DeprecationWarning: .setSecondaryOk() is deprecated. Use .setReadPref("primaryPreferred") instead
Setting read preference from "primary" to "primaryPreferred"

mongodb 23:23:47.86 INFO  ==> Waiting until initial data sync is complete...
mongodb 23:23:48.96 INFO  ==> initial data sync completed
mongodb 23:23:48.96 INFO  ==> Stopping MongoDB...
mongodb 23:24:09.02 INFO  ==> ** MongoDB setup finished! **
rs0 [direct: secondary] admin> 
mongodb 23:24:09.03 INFO  ==> ** Starting MongoDB **
{"t":{"$date":"2024-05-22T23:24:09.065Z"},"s":"I",  "c":"CONTROL",  "id":5760901, "ctx":"main","msg":"Applied --setParameter options","attr":{"serverParameters":{"enableLocalhostAuthBypass":{"default":true,"value":true}}}}
$ kubectl describe po test-mongodb-2
Name:             test-mongodb-2
Namespace:        default
Priority:         0
Service Account:  test-mongodb
Node:             minikube/192.168.49.2
Start Time:       Wed, 22 May 2024 20:23:29 -0300
Labels:           app.kubernetes.io/component=mongodb
                  app.kubernetes.io/instance=test
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=mongodb
                  app.kubernetes.io/version=7.0.9
                  apps.kubernetes.io/pod-index=2
                  controller-revision-hash=test-mongodb-949b889c7
                  helm.sh/chart=mongodb-15.4.5
                  statefulset.kubernetes.io/pod-name=test-mongodb-2
Annotations:      <none>
Status:           Running
IP:               10.244.0.30
IPs:
  IP:           10.244.0.30
Controlled By:  StatefulSet/test-mongodb
Containers:
  mongodb:
    Container ID:    docker://ab1531da48646c60fe4b268fdd88242b116f4f11e604522b73f1b98c6a6e49ab
    Image:           docker.io/bitnami/mongodb:7.0.9-debian-12-r4
    Image ID:        docker-pullable://bitnami/mongodb@sha256:62fb92d0111f8dc0565b7daf8a57279fd09520020b79fd3bc550deb3ae5aee70
    Port:            27017/TCP
    Host Port:       0/TCP
    SeccompProfile:  RuntimeDefault
    Command:
      /scripts/setup.sh
    State:          Running
      Started:      Wed, 22 May 2024 20:23:30 -0300
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:                750m
      ephemeral-storage:  1Gi
      memory:             768Mi
    Requests:
      cpu:                500m
      ephemeral-storage:  50Mi
      memory:             512Mi
    Liveness:             exec [/bitnami/scripts/ping-mongodb.sh] delay=30s timeout=10s period=20s #success=1 #failure=6
    Readiness:            exec [/bitnami/scripts/readiness-probe.sh] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      BITNAMI_DEBUG:                    false
      MY_POD_NAME:                      test-mongodb-2 (v1:metadata.name)
      MY_POD_NAMESPACE:                 default (v1:metadata.namespace)
      MY_POD_HOST_IP:                    (v1:status.hostIP)
      K8S_SERVICE_NAME:                 test-mongodb-headless
      MONGODB_INITIAL_PRIMARY_HOST:     test-mongodb-0.$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
      MONGODB_REPLICA_SET_NAME:         rs0
      MONGODB_ADVERTISED_HOSTNAME:      $(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
      ALLOW_EMPTY_PASSWORD:             yes
      MONGODB_SYSTEM_LOG_VERBOSITY:     0
      MONGODB_DISABLE_SYSTEM_LOG:       no
      MONGODB_DISABLE_JAVASCRIPT:       no
      MONGODB_ENABLE_JOURNAL:           yes
      MONGODB_PORT_NUMBER:              27017
      MONGODB_ENABLE_IPV6:              no
      MONGODB_ENABLE_DIRECTORY_PER_DB:  no
    Mounts:
      /.mongodb from empty-dir (rw,path="mongosh-home")
      /bitnami/mongodb from datadir (rw)
      /bitnami/scripts from common-scripts (rw)
      /opt/bitnami/mongodb/conf from empty-dir (rw,path="app-conf-dir")
      /opt/bitnami/mongodb/logs from empty-dir (rw,path="app-logs-dir")
      /opt/bitnami/mongodb/tmp from empty-dir (rw,path="app-tmp-dir")
      /scripts/setup.sh from scripts (rw,path="setup.sh")
      /tmp from empty-dir (rw,path="tmp-dir")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fgwp8 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-test-mongodb-2
    ReadOnly:   false
  empty-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  common-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      test-mongodb-common-scripts
    Optional:  false
  scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      test-mongodb-scripts
    Optional:  false
  kube-api-access-fgwp8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  8m53s                  default-scheduler  0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Normal   Scheduled         8m52s                  default-scheduler  Successfully assigned default/test-mongodb-2 to minikube
  Normal   Pulled            8m51s                  kubelet            Container image "docker.io/bitnami/mongodb:7.0.9-debian-12-r4" already present on machine
  Normal   Created           8m51s                  kubelet            Created container mongodb
  Normal   Started           8m51s                  kubelet            Started container mongodb
  Warning  Unhealthy         8m40s                  kubelet            Readiness probe failed: Error: Not ready
  Warning  Unhealthy         8m19s (x2 over 8m29s)  kubelet            Readiness probe failed: MongoServerSelectionError: The server is in quiesce mode and will shut down

Services

Now I'd like a LoadBalancer service so I can access the database from MongoDB Compass, but this setup seems to only allow the replica set to be accessed from within the cluster.

$ kubectl get svc
NAME                            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
kubernetes                      ClusterIP   10.96.0.1    <none>        443/TCP     16m
test-mongodb-arbiter-headless   ClusterIP   None         <none>        27017/TCP   16m
test-mongodb-headless           ClusterIP   None         <none>        27017/TCP   16m
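
As a stopgap for inspecting the data from a workstation, a port-forward lets MongoDB Compass reach a single member (a sketch; Compass would then connect to mongodb://127.0.0.1:27017/?directConnection=true):

$ kubectl port-forward pod/test-mongodb-0 27017:27017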

@carrodher
Member

If the values are set in the same way in both cases, there shouldn't be any difference between deploying the chart in a standalone way versus deploying it as a dependency...

Regarding access, you can check the installation notes using helm get notes (see https://helm.sh/docs/helm/helm_get_notes/) where the different accessibility options are shown based on the provided parameters.
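
For example, with the release from this thread:

$ helm get notes test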

Author

Neniel commented May 23, 2024

The notes are shown when the chart is deployed standalone (which works fine). The issue appears when using it as a dependency.

This is the config I have in my values.yaml

mongodb:
  architecture: replicaset
  replicaCount: 3
  externalAccess:
    enabled: true
    service:
      type: LoadBalancer
    autoDiscovery:
      enabled: true
  serviceAccount:
    create: true
  automountServiceAccountToken: true
  rbac:
    create: true
  auth:
    enabled: false
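
To double-check that these values actually reach the mongodb subchart when it is installed as a dependency, rendering the parent chart locally and inspecting the output can help (a sketch, assuming the parent chart directory is test1):

$ helm template test test1 > rendered.yaml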

It works fine if I set externalAccess.enabled to false, but when I set it to true I notice two additional things:

1. Unknown field:

automountServiceAccountToken is flagged as an unknown field during installation and upgrade:

$ helm install test test1
W0523 07:53:09.655358   75876 warnings.go:70] unknown field "spec.template.spec.initContainers[0].automountServiceAccountToken"
NAME: test
LAST DEPLOYED: Thu May 23 07:53:08 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

2. One MongoDB pod is trying to initialize:

There's a pod named test-mongodb-0 that is stuck initializing, and its logs show the following:

$ kubectl logs test-mongodb-0
Defaulted container "mongodb" out of: mongodb, auto-discovery (init)
Error from server (BadRequest): container "mongodb" in pod "test-mongodb-0" is waiting to start: PodInitializing
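
Since the container that is actually failing here is the auto-discovery init container, its logs can be fetched directly by selecting it explicitly:

$ kubectl logs test-mongodb-0 -c auto-discovery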

This is the description of the pod:

$ kubectl describe po test-mongodb-0
Name:             test-mongodb-0
Namespace:        default
Priority:         0
Service Account:  test-mongodb
Node:             minikube/192.168.49.2
Start Time:       Thu, 23 May 2024 07:53:30 -0300
Labels:           app.kubernetes.io/component=mongodb
                  app.kubernetes.io/instance=test
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=mongodb
                  app.kubernetes.io/version=7.0.9
                  controller-revision-hash=test-mongodb-59c5c7c7d6
                  helm.sh/chart=mongodb-15.4.5
                  statefulset.kubernetes.io/pod-name=test-mongodb-0
Annotations:      <none>
Status:           Pending
IP:               10.244.0.77
IPs:
  IP:           10.244.0.77
Controlled By:  StatefulSet/test-mongodb
Init Containers:
  auto-discovery:
    Container ID:  docker://c8ae022f5a938cca537491940d1c2baaf7c6ea10c01851db68018fe29c6e4e8e
    Image:         docker.io/bitnami/kubectl:1.30.1-debian-12-r0
    Image ID:      docker-pullable://bitnami/kubectl@sha256:0aef4af32ece80e21c32ab31438252f32d84ebe35035faafedc4fde184075b4f
    Port:          <none>
    Host Port:     <none>
    Command:
      /scripts/auto-discovery.sh
    State:          Running
      Started:      Thu, 23 May 2024 07:56:27 -0300
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 23 May 2024 07:54:53 -0300
      Finished:     Thu, 23 May 2024 07:56:13 -0300
    Ready:          False
    Restart Count:  2
    Limits:
      cpu:                150m
      ephemeral-storage:  1Gi
      memory:             192Mi
    Requests:
      cpu:                100m
      ephemeral-storage:  50Mi
      memory:             128Mi
    Environment:
      MY_POD_NAME:  test-mongodb-0 (v1:metadata.name)
      SHARED_FILE:  /shared/info.txt
    Mounts:
      /scripts/auto-discovery.sh from scripts (rw,path="auto-discovery.sh")
      /shared from shared (rw)
      /tmp from empty-dir (rw,path="tmp-dir")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lfn68 (ro)
Containers:
  mongodb:
    Container ID:  
    Image:         docker.io/bitnami/mongodb:7.0.9-debian-12-r4
    Image ID:      
    Port:          27017/TCP
    Host Port:     0/TCP
    Command:
      /scripts/setup.sh
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:                750m
      ephemeral-storage:  1Gi
      memory:             768Mi
    Requests:
      cpu:                500m
      ephemeral-storage:  50Mi
      memory:             512Mi
    Liveness:             exec [/bitnami/scripts/ping-mongodb.sh] delay=30s timeout=10s period=20s #success=1 #failure=6
    Readiness:            exec [/bitnami/scripts/readiness-probe.sh] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      BITNAMI_DEBUG:                    false
      SHARED_FILE:                      /shared/info.txt
      MY_POD_NAME:                      test-mongodb-0 (v1:metadata.name)
      MY_POD_NAMESPACE:                 default (v1:metadata.namespace)
      MY_POD_HOST_IP:                    (v1:status.hostIP)
      K8S_SERVICE_NAME:                 test-mongodb-headless
      MONGODB_INITIAL_PRIMARY_HOST:     test-mongodb-0.$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
      MONGODB_REPLICA_SET_NAME:         rs0
      ALLOW_EMPTY_PASSWORD:             yes
      MONGODB_SYSTEM_LOG_VERBOSITY:     0
      MONGODB_DISABLE_SYSTEM_LOG:       no
      MONGODB_DISABLE_JAVASCRIPT:       no
      MONGODB_ENABLE_JOURNAL:           yes
      MONGODB_PORT_NUMBER:              27017
      MONGODB_ENABLE_IPV6:              no
      MONGODB_ENABLE_DIRECTORY_PER_DB:  no
    Mounts:
      /.mongodb from empty-dir (rw,path="mongosh-home")
      /bitnami/mongodb from datadir (rw)
      /bitnami/scripts from common-scripts (rw)
      /opt/bitnami/mongodb/conf from empty-dir (rw,path="app-conf-dir")
      /opt/bitnami/mongodb/logs from empty-dir (rw,path="app-logs-dir")
      /opt/bitnami/mongodb/tmp from empty-dir (rw,path="app-tmp-dir")
      /scripts/setup.sh from scripts (rw,path="setup.sh")
      /shared from shared (rw)
      /tmp from empty-dir (rw,path="tmp-dir")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lfn68 (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-test-mongodb-0
    ReadOnly:   false
  empty-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  common-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      test-mongodb-common-scripts
    Optional:  false
  shared:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      test-mongodb-scripts
    Optional:  false
  kube-api-access-lfn68:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m54s                default-scheduler  Successfully assigned default/test-mongodb-0 to minikube
  Warning  BackOff    70s                  kubelet            Back-off restarting failed container auto-discovery in pod test-mongodb-0_default(31711168-c16c-49b6-a7bd-c513f65c8fd2)
  Normal   Pulled     57s (x3 over 3m53s)  kubelet            Container image "docker.io/bitnami/kubectl:1.30.1-debian-12-r0" already present on machine
  Normal   Created    57s (x3 over 3m53s)  kubelet            Created container auto-discovery
  Normal   Started    57s (x3 over 3m53s)  kubelet            Started container auto-discovery

@carrodher
Member

I suspect the issue may not be directly related to the Bitnami Helm chart, but rather to how the application is being utilized or configured in your specific environment.

Having said that, if you think that's not the case and are interested in contributing a solution, we welcome you to create a pull request. The Bitnami team is excited to review your submission and offer feedback. You can find the contributing guidelines here.

Your contribution will greatly benefit the community. Feel free to reach out if you have any questions or need assistance.

If you have any questions about the application itself, customizing its content, or questions about technology and infrastructure usage, we highly recommend that you refer to the forums and user guides provided by the project responsible for the application or technology.

With that said, we'll keep this ticket open until the stale bot automatically closes it, in case someone from the community contributes valuable insights.


github-actions bot commented Jun 9, 2024

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions bot added the stale (15 days without activity) label on Jun 9, 2024

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.

bitnami-bot closed this as not planned (won't fix, can't repro, duplicate, stale) on Jun 15, 2024