monocular CrashLoopBackOff with mongo in pending #419

Closed
kumiisc opened this Issue Jan 12, 2018 · 12 comments


kumiisc commented Jan 12, 2018

I have installed Monocular using Helm, and the API pods seem to crash-loop. The MongoDB pod is stuck in Pending, and there are no logs in the MongoDB pod. Can someone help me figure out where the issue is?

NAME                                                  READY   STATUS             RESTARTS   AGE
monocular-mongodb-3297385788-2sf4d                    0/1     Pending            0          19m
monocular-monocular-api-4206649573-6pmm6              0/1     CrashLoopBackOff   8          19m
monocular-monocular-api-4206649573-vql6m              0/1     CrashLoopBackOff   8          19m
monocular-monocular-prerender-2060794013-n8k1v        1/1     Running            0          19m
monocular-monocular-ui-2374440452-rglv5               1/1     Running            0          19m
monocular-monocular-ui-2374440452-wwq7p               1/1     Running            0          19m
ngix-nginx-ingress-controller-593623324-wsnkt         1/1     Running            0          22m
ngix-nginx-ingress-default-backend-2369107551-j9253   1/1     Running            0          22m
ui-kubernetes-dashboard-2399210613-1tf6r              1/1     Running            0          20h
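
A generic first triage step for crash-looping pods like the API ones above (illustrative only; the pod name is copied from the listing) is to check the pod events and the logs of the previous container instance:

kubectl describe pod monocular-monocular-api-4206649573-6pmm6
kubectl logs monocular-monocular-api-4206649573-6pmm6 --previous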

prydonius (Member) commented Jan 12, 2018

Hey @kumiisc, this is most likely because the PVC for MongoDB isn't getting bound. Can you check kubectl get pvc and verify that? Do you have a storage class set up in your cluster?
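
For example (generic commands; the PVC name below is a guess based on the release name in the pod listing):

kubectl get pvc
kubectl get storageclass
kubectl describe pvc monocular-mongodb   # the Events section shows why binding or provisioning fails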

itzamnamx commented Mar 19, 2018

I have the same issue. I have a PVC in my cluster using GlusterFS, but the status for MongoDB is Pending.

$ kubectl get pvc
NAME              STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS              AGE
gluster-dyn-pvc   Bound     pvc-7c29c476-2b91-11e8-9dca-b82a729ac5fa   3G         RWX            gluster-heketi-external   3h
my-mono-mongodb   Pending                                                                                                  5m

Running kubectl describe pvc, I realized that the Helm chart (or Kubernetes?) cannot reach the external GlusterFS cluster.

$ kubectl describe pvc
Name:          volted-otter-mongodb
Namespace:     default
StorageClass:  gluster-heketi-external
Status:        Pending
Volume:
Labels:
Annotations:   volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/glusterfs
Finalizers:    []
Capacity:
Access Modes:
Events:
  Type     Reason              Age               From                         Message
  ----     ------              ----              ----                         -------
  Warning  ProvisioningFailed  11m               persistentvolume-controller  Failed to provision volume with StorageClass "gluster-heketi-external": create volume error: error creating volume dial tcp: lookup gfs03.zacatenco.fintecheando.mx on 192.168.123.1:53: no such host
  Warning  ProvisioningFailed  2m (x2 over 6m)   persistentvolume-controller  Failed to provision volume with StorageClass "gluster-heketi-external": create volume error: error creating volume dial tcp: lookup gfs01.zacatenco.fintecheando.mx on 192.168.123.1:53: no such host

Still looking for a possible solution, which I think is related to Kubernetes DNS.
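
One way to verify is to run a lookup from inside the cluster; a minimal sketch, assuming a busybox image can be pulled (the hostname is taken from the events above):

kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup gfs01.zacatenco.fintecheando.mx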

After fixing the DNS issues, I got this message:

$ kubectl logs kilted-porcupine-mongodb-66d7bf586f-wrb2c

Welcome to the Bitnami mongodb container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
Send us your feedback at containers@bitnami.com

nami INFO Initializing mongodb
Error executing 'postInstallation': Group '2000' not found

I think it could be related to kubeapps/kubeapps#92 and helm/charts#2488.
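
For reference, the workaround discussed in helm/charts#2488 boils down to running MongoDB with a security context whose user and group actually exist in the container. A minimal sketch, assuming the stable/mongodb chart in use exposes these securityContext keys (names vary across chart versions) and that the Monocular repo was added via helm repo add:

helm upgrade --install my-mono monocular/monocular \
  --set mongodb.securityContext.enabled=true \
  --set mongodb.securityContext.runAsUser=1001 \
  --set mongodb.securityContext.fsGroup=1001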

prydonius (Member) commented Mar 20, 2018

@itzamnamx were you able to try the fix in helm/charts#2488 (comment)?

itzamnamx commented Mar 20, 2018

@prydonius I have applied that fix in a forked version of the MongoDB chart (https://github.com/fintecheando/charts/blob/master/stable/mongodb/templates/deployment.yaml), but how can I point Monocular at that fork?

Regards

prydonius (Member) commented Mar 21, 2018

@itzamnamx you can fork the Monocular chart and apply the fix there until the change is accepted upstream.

itzamnamx commented Mar 22, 2018

prydonius (Member) commented Mar 22, 2018

@itzamnamx yes, for the forked chart you have two options. You can remove the requirements.yaml/lock files, and place your modified MongoDB chart inside the charts/ directory at the root of the chart. Or if you have a chart repository, you can change the URL and version in requirements.yaml (https://github.com/kubernetes-helm/monocular/blob/master/deployment/monocular/requirements.yaml#L4).
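
For the second option, the requirements.yaml entry would look roughly like this (the repository URL is a placeholder for wherever the forked chart is published, and the version must match what the fork publishes):

dependencies:
- name: mongodb
  version: 0.4.x                                  # version published by the fork
  repository: https://example.github.io/charts   # placeholder repository URL

followed by a helm dependency update so the fork is pulled into charts/.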

iam-merlin (Contributor) commented Jul 31, 2018

I just tried to install the chart and hit the same issue.

I forked the project, updated the mongodb dependency to 4.0.4, and it works perfectly. Do you want a PR?

values.yaml:

# add the following lines
mongodb:
  mongodbDatabase: monocular

requirements.yaml:

# bump chart version to 4.0.4
dependencies:
- name: mongodb
  version: 4.0.4
  repository: https://kubernetes-charts.storage.googleapis.com
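
After bumping the version in requirements.yaml, the dependency needs to be re-resolved before installing; assuming the chart path used in this repo:

helm dependency update deployment/monocular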

prydonius (Member) commented Jul 31, 2018

@iam-merlin sure, a PR for that would be great! Do you know what change in the 4.0.4 MongoDB chart helps fix this?

iam-merlin (Contributor) commented Jul 31, 2018

@prydonius to be honest, I don't know xD.

I just saw that your deps were outdated, so I simply updated them.

prydonius (Member) commented Jul 31, 2018

@iam-merlin fair enough. I think it makes sense to update it, so I'm happy to accept a PR for that!

iam-merlin (Contributor) commented Aug 1, 2018

@prydonius done
