mongodb chart not able to run #2488

Closed
zhiminwen opened this Issue Oct 14, 2017 · 10 comments

@zhiminwen
Contributor

zhiminwen commented Oct 14, 2017

Is this a request for help?:

Yes. Validating that the mongodb chart is working.
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Bug Report

Version of Helm and Kubernetes:
Helm: 2.5
Kubernetes: 1.7.3

Which chart:
mongodb

What happened:
When the chart was deployed, the pod failed. The logs show the following message:

kubectl logs -f voting-tapir-mongodb-2498257007-psvsd

Welcome to the Bitnami mongodb container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
Send us your feedback at containers@bitnami.com

nami    INFO  Initializing mongodb
Error executing 'postInstallation': Group '2008' not found

What you expected to happen:
Pods running

How to reproduce it (as minimally and precisely as possible):

helm install --set image=bitnami/mongodb:3.4.9-r1,mongodbRootPassword=password,mongodbDatabase=my-db,serviceType=NodePort stable/mongodb

Anything else we need to know:
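A quick way to confirm what ownership the PVC actually presents is to mount the same claim from a throwaway debug pod and list it numerically (a sketch only; the pod name and claim name below are hypothetical, so substitute your release's PVC):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pvc-debug                 # hypothetical name
    spec:
      restartPolicy: Never
      containers:
        - name: inspect
          image: busybox
          # Print the numeric uid:gid of the mount point.
          command: ["sh", "-c", "ls -lnd /mnt/data"]
          volumeMounts:
            - name: data
              mountPath: /mnt/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-mongodb-pvc # hypothetical; use your release's claim name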

@fejta-bot


fejta-bot commented Jan 12, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@jtgans


jtgans commented Feb 5, 2018

I, too, have run into this problem. Logs from my installation:

Welcome to the Bitnami mongodb container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
Send us your feedback at containers@bitnami.com
nami    INFO  Initializing mongodb
Error executing 'postInstallation': User '2010' not found 
@jtgans


jtgans commented Feb 5, 2018

Changing to a non-persistent backend fixed the issue, but now I don't have persistent data storage for mongo.
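
For reference, disabling persistence would look roughly like this (assuming the chart exposes the usual persistence.enabled value; check your chart version's values.yaml):

    helm install --set persistence.enabled=false stable/mongodb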

@zhiminwen


Contributor

zhiminwen commented Feb 25, 2018

I think this was because GlusterFS assigns a random gid > 2000 to the volume. I fixed it by adding an init container that chowns the filesystem back to root.
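
For background (an inference about the setup, but consistent with the gids seen above): Kubernetes' GlusterFS dynamic provisioner allocates each volume a GID from the StorageClass's gidMin–gidMax range, which defaults to 2000–2147483647, so values like 2008 and 2010 fit that pattern. A sketch of the relevant StorageClass parameters (the resturl is a placeholder):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: glusterfs
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://heketi.example.com:8080"  # placeholder Heketi endpoint
      gidMin: "2000"        # provisioned volumes get a GID from this range
      gidMax: "2147483647"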

@jtgans


jtgans commented Feb 25, 2018

Sorry, what do you mean by "GlusterFS would assign a random gid >2000", @zhiminwen?

Do you have the init container available publicly so others can see what you did to resolve this?

@jtgans


jtgans commented Feb 25, 2018

/remove-lifecycle stale

@zhiminwen


Contributor

zhiminwen commented Feb 26, 2018

My fix was to add the following to the deployment file:

      initContainers:
        - name: init-myservice
          image: busybox
          # Hand the data directories back to root so the mongodb setup
          # scripts can resolve their owner (note: chown here is not recursive).
          command: ["sh", "-c", "chown root:root /bitnami /bitnami/mongodb"]
          volumeMounts:
            - name: data
              mountPath: /bitnami/mongodb
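
An alternative worth noting (a sketch only, not something this chart exposed in its values at the time, and it only works for volume plugins that support ownership management): let Kubernetes set the volume's group via the pod-level securityContext instead of chowning in an init container:

      securityContext:
        fsGroup: 1001   # hypothetical gid; pick the group the mongodb container runs as

On mount, Kubernetes recursively applies fsGroup to the volume, so the mongodb process can write to it without an extra container.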

@pbutler


pbutler commented Apr 11, 2018

I can confirm this problem too when using GlusterFS (the group id of the PVC mount is set to an odd, possibly random, value, which causes mongo to fail to run), and the above fix works.

@fejta-bot


fejta-bot commented Jul 10, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@stale


stale bot commented Aug 8, 2018

This issue is being automatically closed due to inactivity.

@stale stale bot closed this Aug 8, 2018
