Unable to connect to MongoDB #1017

Open
Stolr opened this issue May 3, 2019 · 14 comments

@Stolr commented May 3, 2019

Hi,

When installing the Helm chart with

helm install --name kubeapps --namespace default bitnami/kubeapps

my pods are not running.

apprepo-sync-bitnami-1556892000-l779v                        0/1     CrashLoopBackOff   5          7m5s
apprepo-sync-incubator-1556892000-bm5td                      0/1     CrashLoopBackOff   5          7m5s
apprepo-sync-stable-1556892000-b5nw7                         0/1     CrashLoopBackOff   5          7m5s
apprepo-sync-svc-cat-1556892000-nzs5s                        0/1     CrashLoopBackOff   5          7m5s
kubeapps-9699fc54-g944l                                      0/1     CrashLoopBackOff   10         31m
kubeapps-9699fc54-pjmkt                                      0/1     CrashLoopBackOff   10         31m
kubeapps-internal-apprepository-controller-76c654d75-tgwpv   1/1     Running            0          31m
kubeapps-internal-chartsvc-55b768f59d-29njw                  0/1     Running            4          3m38s
kubeapps-internal-chartsvc-55b768f59d-l5ggf                  0/1     CrashLoopBackOff   3          3m38s
kubeapps-internal-chartsvc-58447cbb6f-h66m7                  0/1     CrashLoopBackOff   3          3m38s
kubeapps-internal-dashboard-584bb4b48b-566tt                 1/1     Running            0          31m
kubeapps-internal-dashboard-584bb4b48b-gx9zg                 1/1     Running            0          31m
kubeapps-internal-tiller-proxy-cdf6f4b54-4jnlc               1/1     Running            0          31m
kubeapps-internal-tiller-proxy-cdf6f4b54-5p52x               1/1     Running            0          31m
kubeapps-mongodb-658d4748ff-rc7fn                            1/1     Running            0          5m43s

I'm getting this error:

kubectl logs -f  kubeapps-internal-chartsvc-55b768f59d-29njw

level=fatal msg="unable to connect to MongoDB" host=kubeapps-mongodb
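
A quick cross-check (pod name taken from the MongoDB pod in the listing above) is to confirm the database side is actually healthy:

kubectl logs kubeapps-mongodb-658d4748ff-rc7fn --namespace default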

project-bot added this to Inbox in Kubeapps on May 3, 2019

@andresmgot (Contributor) commented May 3, 2019

Hi @Stolr,

It's normal for the chartsvc to restart a few times while the mongodb pod starts; it should be running eventually. The same goes for the *-sync-* jobs.

The kubeapps-9699fc54-* pods should not be crashing, though. What do the logs show there?
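
For example (pod name and namespace taken from the report above; --previous shows the log of the last crashed container):

kubectl logs kubeapps-9699fc54-g944l --namespace default
kubectl logs kubeapps-9699fc54-g944l --namespace default --previous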

@prydonius (Member) commented May 3, 2019

Looks like a possible DNS issue; the kubeapps-9699fc54-* pods can crash if they're unable to reach the dashboard pods.

@FengyunPan2 commented May 13, 2019

I'm hitting it too.

@prydonius (Member) commented May 13, 2019

@Stolr are you still experiencing this issue?

@FengyunPan2 are you able to run a Pod in the same namespace and connect to kubeapps-mongodb? If not, it is likely a DNS issue in your cluster.
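
One way to run that check (a sketch; the image and service name come from the chart output above, the namespace is wherever Kubeapps was installed):

kubectl run mongo-client --rm -it --restart=Never --namespace default --image=docker.io/bitnami/mongodb:4.0.3 -- mongo --host kubeapps-mongodb --eval "db.adminCommand('ping')"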

@FengyunPan2 commented May 14, 2019

Thanks for your answer.
When my app and mongodb are created in the 'kube-system' namespace, it doesn't work well.
But when I recreate them in another namespace, it works fine. Why?

@Stolr (Author) commented May 14, 2019

@prydonius It looks OK now, but only after I restarted all the VMs in Azure.

So yeah, you might need to wait a while.
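
If it is just a question of waiting, watching the pods settle is usually enough, e.g.:

kubectl get pods --namespace default --watch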

@bbrundert commented May 15, 2019

Hey everyone, I have the same issue. I followed the instructions from https://github.com/kubeapps/kubeapps/tree/0dc038cd9aeac5289cf4514e85d5b3c45a1c6242/chart/kubeapps and used a LoadBalancer to expose the service.

kubectl get all -n kubeapps
NAME                                                              READY   STATUS             RESTARTS   AGE
pod/apprepo-sync-bitnami-1557943200-f59dr                         1/1     Running            2          2m14s
pod/apprepo-sync-incubator-1557943200-jspzg                       0/1     CrashLoopBackOff   2          2m14s
pod/apprepo-sync-stable-1557943200-x52g4                          0/1     CrashLoopBackOff   2          2m13s
pod/apprepo-sync-svc-cat-1557943200-5nsx5                         0/1     CrashLoopBackOff   2          2m13s
pod/kubeapps-556c58b878-28d8q                                     1/1     Running            0          145m
pod/kubeapps-556c58b878-wnlt5                                     1/1     Running            0          145m
pod/kubeapps-internal-apprepository-controller-578bcd448c-w78mc   1/1     Running            0          145m
pod/kubeapps-internal-chartsvc-55b768f59d-4lp6t                   0/1     CrashLoopBackOff   30         145m
pod/kubeapps-internal-chartsvc-55b768f59d-7fcv6                   0/1     CrashLoopBackOff   30         145m
pod/kubeapps-internal-dashboard-8487559f7f-nb8qv                  1/1     Running            0          145m
pod/kubeapps-internal-dashboard-8487559f7f-xxgbk                  1/1     Running            0          145m
pod/kubeapps-internal-tiller-proxy-7797d5c556-dw85p               1/1     Running            0          145m
pod/kubeapps-internal-tiller-proxy-7797d5c556-vc86t               1/1     Running            0          145m

NAME                                     TYPE           CLUSTER-IP       EXTERNAL-IP                 PORT(S)        AGE
service/kubeapps                         LoadBalancer   10.100.200.117   100.64.80.1,192.168.64.10   80:32767/TCP   145m
service/kubeapps-internal-chartsvc       ClusterIP      10.100.200.248   <none>                      8080/TCP       145m
service/kubeapps-internal-dashboard      ClusterIP      10.100.200.104   <none>                      8080/TCP       145m
service/kubeapps-internal-tiller-proxy   ClusterIP      10.100.200.51    <none>                      8080/TCP       145m
service/kubeapps-mongodb                 ClusterIP      10.100.200.201   <none>                      27017/TCP      145m

NAME                                                         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kubeapps                                     2         2         2            2           145m
deployment.apps/kubeapps-internal-apprepository-controller   1         1         1            1           145m
deployment.apps/kubeapps-internal-chartsvc                   2         2         2            0           145m
deployment.apps/kubeapps-internal-dashboard                  2         2         2            2           145m
deployment.apps/kubeapps-internal-tiller-proxy               2         2         2            2           145m
deployment.apps/kubeapps-mongodb                             1         0         0            0           145m

NAME                                                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/kubeapps-556c58b878                                     2         2         2       145m
replicaset.apps/kubeapps-internal-apprepository-controller-578bcd448c   1         1         1       145m
replicaset.apps/kubeapps-internal-chartsvc-55b768f59d                   2         2         0       145m
replicaset.apps/kubeapps-internal-dashboard-8487559f7f                  2         2         2       145m
replicaset.apps/kubeapps-internal-tiller-proxy-7797d5c556               2         2         2       145m
replicaset.apps/kubeapps-mongodb-89c57f6f9                              1         0         0       145m

NAME                                          COMPLETIONS   DURATION   AGE
job.batch/apprepo-sync-bitnami-1557939600     0/1           62m        62m
job.batch/apprepo-sync-bitnami-1557943200     0/1           2m15s      2m15s
job.batch/apprepo-sync-bitnami-zjnpx          0/1           145m       145m
job.batch/apprepo-sync-incubator-1557939600   0/1           62m        62m
job.batch/apprepo-sync-incubator-1557943200   0/1           2m15s      2m15s
job.batch/apprepo-sync-incubator-j4xn2        0/1           145m       145m
job.batch/apprepo-sync-stable-1557939600      0/1           62m        62m
job.batch/apprepo-sync-stable-1557943200      0/1           2m14s      2m15s
job.batch/apprepo-sync-stable-7l5mq           0/1           145m       145m
job.batch/apprepo-sync-svc-cat-1557939600     0/1           62m        62m
job.batch/apprepo-sync-svc-cat-1557943200     0/1           2m14s      2m14s
job.batch/apprepo-sync-svc-cat-8mtcm          0/1           145m       145m

NAME                                   SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/apprepo-sync-bitnami     0 * * * *   False     1        2m24s           145m
cronjob.batch/apprepo-sync-incubator   0 * * * *   False     1        2m24s           145m
cronjob.batch/apprepo-sync-stable      0 * * * *   False     1        2m24s           145m
cronjob.batch/apprepo-sync-svc-cat     0 * * * *   False     1        2m24s           145m

The CRD is there:

$ kubectl get customresourcedefinitions
NAME                           CREATED AT
apprepositories.kubeapps.com   2019-05-15T14:58:22Z

Looking at the logs:

$ kubectl logs kubeapps-internal-chartsvc-55b768f59d-4lp6t -n kubeapps
time="2019-05-15T16:08:30Z" level=fatal msg="unable to connect to MongoDB" host=kubeapps-mongodb

Looking at the pod:

$ kubectl describe pod apprepo-sync-incubator-1557943200-jspzg -n kubeapps
Name:               apprepo-sync-incubator-1557943200-jspzg
Namespace:          kubeapps
Priority:           0
PriorityClassName:  <none>
Node:               f23e26b4-1e71-4ab7-b614-76750208df85/10.246.0.4
Start Time:         Wed, 15 May 2019 20:00:09 +0200
Labels:             apprepositories.kubeapps.com/repo-name=incubator
                    controller-uid=47908013-773b-11e9-b6dc-00505686e046
                    job-name=apprepo-sync-incubator-1557943200
Annotations:        <none>
Status:             Running
IP:                 172.31.5.3
Controlled By:      Job/apprepo-sync-incubator-1557943200
Containers:
  sync:
    Container ID:  docker://4b391b673e36191ba8350acfe30416bad5e5ef8cbb455c54490a1c93e2bc52ae
    Image:         docker.io/bitnami/kubeapps-chart-repo:1.4.0-r1
    Image ID:      docker-pullable://bitnami/kubeapps-chart-repo@sha256:8640a18ff79060cc96691364bec7441baee8a10c953e110a250c348d8d0cb7c7
    Port:          <none>
    Host Port:     <none>
    Command:
      /chart-repo
    Args:
      sync
      --mongo-url=kubeapps-mongodb
      --mongo-user=root
      --user-agent-comment=kubeapps/v1.3.2
      incubator
      https://kubernetes-charts-incubator.storage.googleapis.com
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 15 May 2019 20:03:46 +0200
      Finished:     Wed, 15 May 2019 20:04:16 +0200
    Ready:          False
    Restart Count:  4
    Environment:
      MONGO_PASSWORD:  <set to the key 'mongodb-root-password' in secret 'kubeapps-mongodb'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4jf4k (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-4jf4k:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4jf4k
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                   From                                           Message
  ----     ------     ----                  ----                                           -------
  Normal   Scheduled  5m29s                 default-scheduler                              Successfully assigned kubeapps/apprepo-sync-incubator-1557943200-jspzg to f23e26b4-1e71-4ab7-b614-76750208df85
  Normal   Pulled     112s (x5 over 5m25s)  kubelet, f23e26b4-1e71-4ab7-b614-76750208df85  Container image "docker.io/bitnami/kubeapps-chart-repo:1.4.0-r1" already present on machine
  Normal   Created    112s (x5 over 5m25s)  kubelet, f23e26b4-1e71-4ab7-b614-76750208df85  Created container
  Normal   Started    112s (x5 over 5m25s)  kubelet, f23e26b4-1e71-4ab7-b614-76750208df85  Started container
  Warning  BackOff    17s (x13 over 4m21s)  kubelet, f23e26b4-1e71-4ab7-b614-76750208df85  Back-off restarting failed container

Is <set to the key 'mongodb-root-password' in secret 'kubeapps-mongodb'> expected in the describe pod output?
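
For completeness, one way to double-check that the referenced secret exists and carries that key:

kubectl get secret kubeapps-mongodb -n kubeapps -o jsonpath='{.data.mongodb-root-password}' | base64 --decode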

@andresmgot (Contributor) commented May 15, 2019

@bbrundert the problem in your case is that the MongoDB deployment has not started.

What's the output of running the following?

kubectl describe deployment -n kubeapps kubeapps-mongodb
kubectl get -o yaml -n kubeapps deployment kubeapps-mongodb

That should give you more info about the issue.
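
Since the kubectl get all output above shows the MongoDB ReplicaSet at 0 current replicas, its events usually carry the exact rejection message as well, e.g. (ReplicaSet name taken from that output):

kubectl describe replicaset -n kubeapps kubeapps-mongodb-89c57f6f9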

@bbrundert commented May 15, 2019

Thanks for jumping in, @andresmgot!

I did the following (based on what I saw in #577):

helm delete --purge kubeapps
helm repo update
helm install --name kubeapps --namespace kubeapps bitnami/kubeapps --set frontend.service.type=LoadBalancer --set mongodb.securityContext.enable=false --set mongodb.mongodbEnableIPv6=false

Outputs:

$ kubectl describe deployment -n kubeapps kubeapps-mongodb
Name:                   kubeapps-mongodb
Namespace:              kubeapps
CreationTimestamp:      Wed, 15 May 2019 21:07:15 +0200
Labels:                 app=mongodb
                        chart=mongodb-4.9.0
                        heritage=Tiller
                        release=kubeapps
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=mongodb,release=kubeapps
Replicas:               1 desired | 0 updated | 0 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  app=mongodb
           chart=mongodb-4.9.0
           release=kubeapps
  Containers:
   kubeapps-mongodb:
    Image:      docker.io/bitnami/mongodb:4.0.3
    Port:       27017/TCP
    Host Port:  0/TCP
    Limits:
      cpu:     500m
      memory:  512Mi
    Requests:
      cpu:      50m
      memory:   256Mi
    Liveness:   exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:  exec [mongo --eval db.adminCommand('ping')] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      MONGODB_ROOT_PASSWORD:  <set to the key 'mongodb-root-password' in secret 'kubeapps-mongodb'>  Optional: false
      MONGODB_USERNAME:
      MONGODB_DATABASE:
      MONGODB_ENABLE_IPV6:    no
      MONGODB_EXTRA_FLAGS:
    Mounts:
      /bitnami/mongodb from data (rw)
  Volumes:
   data:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
Conditions:
  Type             Status  Reason
  ----             ------  ------
  Available        True    MinimumReplicasAvailable
  ReplicaFailure   True    FailedCreate
OldReplicaSets:    <none>
NewReplicaSet:     kubeapps-mongodb-599d59565f (0/1 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  4m38s  deployment-controller  Scaled up replica set kubeapps-mongodb-599d59565f to 1

And:

$ kubectl get -o yaml -n kubeapps deployment kubeapps-mongodb
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2019-05-15T19:07:15Z"
  generation: 1
  labels:
    app: mongodb
    chart: mongodb-4.9.0
    heritage: Tiller
    release: kubeapps
  name: kubeapps-mongodb
  namespace: kubeapps
  resourceVersion: "5568042"
  selfLink: /apis/extensions/v1beta1/namespaces/kubeapps/deployments/kubeapps-mongodb
  uid: a6fc2993-7744-11e9-b6dc-00505686e046
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: mongodb
      release: kubeapps
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mongodb
        chart: mongodb-4.9.0
        release: kubeapps
    spec:
      containers:
      - env:
        - name: MONGODB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: mongodb-root-password
              name: kubeapps-mongodb
        - name: MONGODB_USERNAME
        - name: MONGODB_DATABASE
        - name: MONGODB_ENABLE_IPV6
          value: "no"
        - name: MONGODB_EXTRA_FLAGS
        image: docker.io/bitnami/mongodb:4.0.3
        imagePullPolicy: Always
        livenessProbe:
          exec:
            command:
            - mongo
            - --eval
            - db.adminCommand('ping')
          failureThreshold: 6
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: kubeapps-mongodb
        ports:
        - containerPort: 27017
          name: mongodb
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - mongo
            - --eval
            - db.adminCommand('ping')
          failureThreshold: 6
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 50m
            memory: 256Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /bitnami/mongodb
          name: data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1001
        runAsUser: 1001
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: data
status:
  conditions:
  - lastTransitionTime: "2019-05-15T19:07:15Z"
    lastUpdateTime: "2019-05-15T19:07:15Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2019-05-15T19:07:15Z"
    lastUpdateTime: "2019-05-15T19:07:15Z"
    message: 'pods "kubeapps-mongodb-599d59565f-kjfkx" is forbidden: pod.Spec.SecurityContext.RunAsUser
      is forbidden'
    reason: FailedCreate
    status: "True"
    type: ReplicaFailure
  observedGeneration: 1
  unavailableReplicas: 1

Looking at my API server, it is set with --enable-admission-plugins=SecurityContextDeny,DenyEscalatingExec. I guess I'll have to enable privileged containers for the cluster and try again.
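
On clusters where the API server runs as a static pod (not applicable on Enterprise PKS, where this is driven by the plan configuration), the flag can be checked directly on a control-plane node:

grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml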

@bbrundert commented May 15, 2019

And it worked out fine... I had to enable the "Enable Privileged Containers" setting in my "Plans" configuration in VMware Enterprise PKS for this to work. That removed the --enable-admission-plugins=SecurityContextDeny,DenyEscalatingExec flag on my API server after I redeployed the K8s cluster. MongoDB and Kubeapps are now up and running.

$ kubectl get -o yaml -n kubeapps deployment kubeapps-mongodb
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2019-05-15T19:07:15Z"
  generation: 1
  labels:
    app: mongodb
    chart: mongodb-4.9.0
    heritage: Tiller
    release: kubeapps
  name: kubeapps-mongodb
  namespace: kubeapps
  resourceVersion: "5572603"
  selfLink: /apis/extensions/v1beta1/namespaces/kubeapps/deployments/kubeapps-mongodb
  uid: a6fc2993-7744-11e9-b6dc-00505686e046
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: mongodb
      release: kubeapps
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mongodb
        chart: mongodb-4.9.0
        release: kubeapps
    spec:
      containers:
      - env:
        - name: MONGODB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: mongodb-root-password
              name: kubeapps-mongodb
        - name: MONGODB_USERNAME
        - name: MONGODB_DATABASE
        - name: MONGODB_ENABLE_IPV6
          value: "no"
        - name: MONGODB_EXTRA_FLAGS
        image: docker.io/bitnami/mongodb:4.0.3
        imagePullPolicy: Always
        livenessProbe:
          exec:
            command:
            - mongo
            - --eval
            - db.adminCommand('ping')
          failureThreshold: 6
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: kubeapps-mongodb
        ports:
        - containerPort: 27017
          name: mongodb
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - mongo
            - --eval
            - db.adminCommand('ping')
          failureThreshold: 6
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 50m
            memory: 256Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /bitnami/mongodb
          name: data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1001
        runAsUser: 1001
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: data
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2019-05-15T19:07:15Z"
    lastUpdateTime: "2019-05-15T19:07:15Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

@andresmgot (Contributor) commented May 15, 2019

Oh, I see the problem, @bbrundert. To avoid the securityContext configuration you can also set the flag --set mongodb.securityContext.enabled=false when installing the chart. The MongoDB image is configured not to run as the root user anyway, so that won't be a problem.
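
The values-file equivalent of that flag would be (a sketch; same keys as the --set flag above):

mongodb:
  securityContext:
    enabled: false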

@bbrundert commented May 15, 2019

I tried that but it didn't work (see my previous comment). I wrote a quick blog post for Enterprise PKS users who might run into the same issue while testing: http://blog.think-v.com/?p=5740

@andresmgot (Contributor) commented May 15, 2019

Oh, interesting, I missed that comment. I think I know what happened: in the command you sent you are using mongodb.securityContext.enable, while it should be mongodb.securityContext.enabled (note the trailing d).

Could that be your issue?

BTW, cool article!
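
For reference, the earlier install command with the corrected flag name would be:

helm install --name kubeapps --namespace kubeapps bitnami/kubeapps --set frontend.service.type=LoadBalancer --set mongodb.securityContext.enabled=false --set mongodb.mongodbEnableIPv6=false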

@bbrundert commented May 16, 2019

You have the eyes of an eagle! Thanks for spotting this, you are absolutely right. After changing it to mongodb.securityContext.enabled, it worked on the K8s cluster without enabling privileged containers (using --enable-admission-plugins=SecurityContextDeny,DenyEscalatingExec was fine after all). I redeployed my K8s cluster with the flag on the API server again, redeployed the Helm chart, and it's all fine now. Updated the blog post and mentioned you as well :) Thanks!
