
MongoDB Chart fails on windows host #827

Closed
dwitzig opened this issue Sep 27, 2018 · 33 comments
Labels
stale 15 days without activity

Comments

@dwitzig

dwitzig commented Sep 27, 2018

The MongoDB chart continually crashes and restarts on a Windows 10 host with mongodb.persistence.enabled=true (with persistence disabled it works without issue).
Running on Docker for Windows version 18.06.1-ce-win73 (19507) with Kubernetes enabled.

kubectl describe

kubectl describe pod dev-mongodb-6584dd75f5-vc9rp
Name:           dev-mongodb-6584dd75f5-vc9rp
Namespace:      default
Node:           docker-for-desktop/192.168.65.3
Start Time:     Wed, 26 Sep 2018 21:20:55 -0700
Labels:         app=mongodb
                pod-template-hash=2140883191
                release=dev
Annotations:    <none>
Status:         Running
IP:             10.1.0.62
Controlled By:  ReplicaSet/dev-mongodb-6584dd75f5
Containers:
  dev-mongodb:
    Container ID:   docker://4e8bdefbc5d8727d82bba976e5552ae4dd5b92e1bcee6c13c8f985aa12b5f1ab
    Image:          docker.io/bitnami/mongodb:3.6
    Image ID:       docker-pullable://bitnami/mongodb@sha256:a3b85168bcc94a329b96729683edd3ec731a8aac902c664ee6a3aeba0b5f5293
    Port:           27017/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    100
      Started:      Wed, 26 Sep 2018 21:21:50 -0700
      Finished:     Wed, 26 Sep 2018 21:21:59 -0700
    Ready:          False
    Restart Count:  1
    Liveness:       exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:      exec [mongo --eval db.adminCommand('ping')] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      MONGODB_ROOT_PASSWORD:  <set to the key 'mongodb-root-password' in secret 'dev-mongodb'>  Optional: false
      MONGODB_USERNAME:
      MONGODB_DATABASE:
      MONGODB_ENABLE_IPV6:    yes
      MONGODB_EXTRA_FLAGS:    --smallfiles --logpath=/dev/null
    Mounts:
      /bitnami/mongodb from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-phmll (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  dev-mongodb
    ReadOnly:   false
  default-token-phmll:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-phmll
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age   From                         Message
  ----     ------                 ----  ----                         -------
  Normal   Scheduled              1m    default-scheduler            Successfully assigned dev-mongodb-6584dd75f5-vc9rp to docker-for-desktop
  Normal   SuccessfulMountVolume  1m    kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "pvc-b8423bfb-c20c-11e8-bb65-00155d016f01"
  Normal   SuccessfulMountVolume  1m    kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "default-token-phmll"
  Warning  Unhealthy              44s   kubelet, docker-for-desktop  Readiness probe failed: MongoDB shell version v3.6.8
connecting to: mongodb://127.0.0.1:27017
2018-09-27T04:21:23.949+0000 I NETWORK  [thread1] Socket recv() Connection reset by peer 127.0.0.1:27017
2018-09-27T04:21:24.042+0000 I NETWORK  [thread1] SocketException: remote: (NONE):0 error: SocketException socket exception [RECV_ERROR] server [127.0.0.1:27017]
2018-09-27T04:21:24.058+0000 E QUERY    [thread1] Error: network error while attempting to run command 'isMaster' on host '127.0.0.1:27017'  :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed
  Warning  Unhealthy  34s  kubelet, docker-for-desktop  Readiness probe failed: MongoDB shell version v3.6.8
connecting to: mongodb://127.0.0.1:27017
2018-09-27T04:21:33.631+0000 W NETWORK  [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused
2018-09-27T04:21:34.401+0000 E QUERY    [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed
  Warning  Unhealthy  28s  kubelet, docker-for-desktop  Liveness probe failed: MongoDB shell version v3.6.8
connecting to: mongodb://127.0.0.1:27017
2018-09-27T04:21:40.145+0000 W NETWORK  [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused
2018-09-27T04:21:40.145+0000 E QUERY    [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed

mongodb logs

Welcome to the Bitnami mongodb container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
nami    INFO  Initializing mongodb
mongodb INFO  ==> Deploying MongoDB with persisted data...
mongodb INFO  ==> No injected configuration files found. Creating default config files...
mongodb INFO
mongodb INFO  ########################################################################
mongodb INFO   Installation parameters for mongodb:
mongodb INFO     Persisted data and properties have been restored.
mongodb INFO     Any input specified will not take effect.
mongodb INFO   This installation requires no credentials.
mongodb INFO  ########################################################################
mongodb INFO
nami    INFO  mongodb successfully initialized
INFO  ==> Starting mongodb...
INFO  ==> Starting mongod...
2018-09-27T04:34:00.223+0000 I CONTROL  [initandlisten] MongoDB starting : pid=34 port=27017 dbpath=/opt/bitnami/mongodb/data/db 64-bit host=dev-mongodb-7b9d7bbdd9-bglkr
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten] db version v3.6.8
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten] git version: 6bc9ed599c3fa164703346a22bad17e33fa913e4
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.1.0f  25 May 2017
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten] modules: none
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten] build environment:
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten]     distmod: debian92
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten]     distarch: x86_64
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten] options: { config: "/opt/bitnami/mongodb/conf/mongodb.conf", net: { bindIpAll: true, ipv6: true, port: 27017, unixDomainSocket: { enabled: true, pathPrefix: "/opt/bitnami/mongodb/tmp" } }, processManagement: { fork: false, pidFilePath: "/opt/bitnami/mongodb/tmp/mongodb.pid" }, security: { authorization: "disabled" }, setParameter: { enableLocalhostAuthBypass: "true" }, storage: { dbPath: "/opt/bitnami/mongodb/data/db", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, logRotate: "reopen", path: true } }
2018-09-27T04:34:00.234+0000 I -        [initandlisten] Detected data files in /opt/bitnami/mongodb/data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2018-09-27T04:34:00.237+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=478M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),cache_cursors=false,compatibility=(release="3.0",require_max="3.0"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2018-09-27T04:34:00.732+0000 E STORAGE  [initandlisten] WiredTiger error (1) [1538022840:732507][34:0x7fdef0bb8580], file:WiredTiger.wt, connection: /opt/bitnami/mongodb/data/db/WiredTiger.wt: handle-open: open: Operation not permitted
2018-09-27T04:34:00.733+0000 E -        [initandlisten] Assertion: 28595:1: Operation not permitted src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp 421
2018-09-27T04:34:00.856+0000 I STORAGE  [initandlisten] exception in initAndListen: Location28595: 1: Operation not permitted, terminating
2018-09-27T04:34:00.857+0000 I NETWORK  [initandlisten] shutdown: going to close listening sockets...
2018-09-27T04:34:00.857+0000 I NETWORK  [initandlisten] removing socket file: /opt/bitnami/mongodb/tmp/mongodb-27017.sock
2018-09-27T04:34:00.857+0000 I CONTROL  [initandlisten] now exiting
2018-09-27T04:34:00.857+0000 I CONTROL  [initandlisten] shutting down with code:100
@tompizmor
Member

Hi @dwitzig,

MongoDB is a non-root container that runs as user 1001 by default. The volumes used by MongoDB should be owned by this user.
I suspect the issue is that the securityContext feature is not working properly in Docker for Windows.

Can you try installing the chart with the following options to launch it as the root user?

helm install bitnami/mongodb --set securityContext.fsGroup=0,securityContext.runAsUser=0
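
A quick way to confirm the ownership problem described above, if the pod stays running long enough to exec into (a sketch; the pod name is taken from the describe output above):

# Show the numeric owner/group of the persisted data directory inside the container.
# On an affected Docker-for-Windows mount it is typically owned by root (uid 0)
# rather than the non-root mongodb user (uid 1001), so mongod cannot write to it.
kubectl exec dev-mongodb-6584dd75f5-vc9rp -- ls -ldn /bitnami/mongodb
kubectl exec dev-mongodb-6584dd75f5-vc9rp -- id   # shows the uid the container actually runs as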

@dwitzig
Author

dwitzig commented Sep 27, 2018

Hi @tompizmor, thanks for your reply.
Unfortunately, running as root seems to have its own issues (see logs below).

I also confirmed my Windows user ID is 1001.

kubectl logs -f affable-mole-mongodb-b68c8b88d-cwvdd
Welcome to the Bitnami mongodb container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
nami    INFO  Initializing mongodb
mongodb INFO  ==> Deploying MongoDB from scratch...
mongodb INFO  ==> No injected configuration files found. Creating default config files...
mongodb INFO  ==> Creating root user...
mongodb INFO  ==> Enabling authentication...
mongodb INFO
mongodb INFO  ########################################################################
mongodb INFO   Installation parameters for mongodb:
mongodb INFO     Root Password: **********
mongodb INFO   (Passwords are not shown for security reasons)
mongodb INFO  ########################################################################
mongodb INFO
nami    INFO  mongodb successfully initialized
INFO  ==> Starting mongodb...
INFO  ==> Starting mongod...
2018-09-27T17:57:35.180+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2018-09-27T17:57:35.183+0000 I CONTROL  [initandlisten] MongoDB starting : pid=87 port=27017 dbpath=/opt/bitnami/mongodb/data/db 64-bit host=affable-mole-mongodb-b68c8b88d-cwvdd
2018-09-27T17:57:35.183+0000 I CONTROL  [initandlisten] db version v4.0.2
2018-09-27T17:57:35.183+0000 I CONTROL  [initandlisten] git version: fc1573ba18aee42f97a3bb13b67af7d837826b47
2018-09-27T17:57:35.183+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.1.0f  25 May 2017
2018-09-27T17:57:35.183+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2018-09-27T17:57:35.183+0000 I CONTROL  [initandlisten] modules: none
2018-09-27T17:57:35.183+0000 I CONTROL  [initandlisten] build environment:
2018-09-27T17:57:35.183+0000 I CONTROL  [initandlisten]     distmod: debian92
2018-09-27T17:57:35.183+0000 I CONTROL  [initandlisten]     distarch: x86_64
2018-09-27T17:57:35.183+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2018-09-27T17:57:35.183+0000 I CONTROL  [initandlisten] options: { config: "/opt/bitnami/mongodb/conf/mongodb.conf", net: { bindIpAll: true, ipv6: true, port: 27017, unixDomainSocket: { enabled: true, pathPrefix: "/opt/bitnami/mongodb/tmp" } }, processManagement: { fork: false, pidFilePath: "/opt/bitnami/mongodb/tmp/mongodb.pid" }, security: { authorization: "enabled" }, setParameter: { enableLocalhostAuthBypass: "false" }, storage: { dbPath: "/opt/bitnami/mongodb/data/db", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, logRotate: "reopen", path: true } }
2018-09-27T17:57:35.188+0000 I STORAGE  [initandlisten] Detected data files in /opt/bitnami/mongodb/data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2018-09-27T17:57:35.191+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=478M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2018-09-27T17:57:39.160+0000 I STORAGE  [initandlisten] WiredTiger message [1538071059:160714][87:0x7f7412fa39c0], txn-recover: Main recovery loop: starting at 1/28672
2018-09-27T17:57:39.583+0000 I STORAGE  [initandlisten] WiredTiger message [1538071059:582998][87:0x7f7412fa39c0], txn-recover: Recovering log 1 through 2
2018-09-27T17:57:40.032+0000 I STORAGE  [initandlisten] WiredTiger message [1538071060:32476][87:0x7f7412fa39c0], txn-recover: Recovering log 2 through 2
2018-09-27T17:57:40.067+0000 I STORAGE  [initandlisten] WiredTiger message [1538071060:67724][87:0x7f7412fa39c0], txn-recover: Set global recovery timestamp: 0
2018-09-27T17:57:40.067+0000 E STORAGE  [initandlisten] WiredTiger error (22) [1538071060:67787][87:0x7f7412fa39c0], WT_SESSION.checkpoint: __posix_sync, 108: /opt/bitnami/mongodb/data/db/journal: handle-sync: fdatasync: Invalid argument Raw: [1538071060:67787][87:0x7f7412fa39c0], WT_SESSION.checkpoint: __posix_sync, 108: /opt/bitnami/mongodb/data/db/journal: handle-sync: fdatasync: Invalid argument
2018-09-27T17:57:40.067+0000 E STORAGE  [initandlisten] WiredTiger error (-31804) [1538071060:67798][87:0x7f7412fa39c0], WT_SESSION.checkpoint: __wt_panic, 523: the process must exit and restart: WT_PANIC: WiredTiger library panic Raw: [1538071060:67798][87:0x7f7412fa39c0], WT_SESSION.checkpoint: __wt_panic, 523: the process must exit and restart: WT_PANIC: WiredTiger library panic
2018-09-27T17:57:40.067+0000 F -        [initandlisten] Fatal Assertion 50853 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 401
2018-09-27T17:57:40.067+0000 F -        [initandlisten]

***aborting after fassert() failure


2018-09-27T17:57:40.429+0000 F -        [initandlisten] Got signal: 6 (Aborted).
 0x55fc27119b81 0x55fc27118d99 0x55fc2711927d 0x7f74116d00c0 0x7f7411352fff 0x7f741135442a 0x55fc257654a3 0x55fc25857f56 0x55fc258c8cd9 0x55fc256f1424 0x55fc256f1844 0x55fc258933b3 0x55fc25956661 0x55fc258de2e1 0x55fc258dc01b 0x55fc258dc973 0x55fc258c218a 0x55fc258e1b94 0x55fc2586d027 0x55fc25869c94 0x55fc25835b65 0x55fc25819089 0x55fc25e63185 0x55fc256e9ff6 0x55fc257d0013 0x55fc25767039 0x7f74113402e1 0x55fc257ceaca
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"55FC24D61000","o":"23B8B81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"55FC24D61000","o":"23B7D99"},{"b":"55FC24D61000","o":"23B827D"},{"b":"7F74116BF000","o":"110C0"},{"b":"7F7411320000","o":"32FFF","s":"gsignal"},{"b":"7F7411320000","o":"3442A","s":"abort"},{"b":"55FC24D61000","o":"A044A3","s":"_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj"},{"b":"55FC24D61000","o":"AF6F56"},{"b":"55FC24D61000","o":"B67CD9"},{"b":"55FC24D61000","o":"990424","s":"__wt_err_func"},{"b":"55FC24D61000","o":"990844","s":"__wt_panic"},{"b":"55FC24D61000","o":"B323B3"},{"b":"55FC24D61000","o":"BF5661","s":"__wt_log_force_sync"},{"b":"55FC24D61000","o":"B7D2E1","s":"__wt_txn_checkpoint_log"},{"b":"55FC24D61000","o":"B7B01B"},{"b":"55FC24D61000","o":"B7B973","s":"__wt_txn_checkpoint"},{"b":"55FC24D61000","o":"B6118A"},{"b":"55FC24D61000","o":"B80B94","s":"__wt_txn_recover"},{"b":"55FC24D61000","o":"B0C027","s":"__wt_connection_workers"},{"b":"55FC24D61000","o":"B08C94","s":"wiredtiger_open"},{"b":"55FC24D61000","o":"AD4B65","s":"_ZN5mongo18WiredTigerKVEngineC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8_PNS_11ClockSourceES8_mbbbb"},{"b":"55FC24D61000","o":"AB8089"},{"b":"55FC24D61000","o":"1102185","s":"_ZN5mongo23initializeStorageEngineEPNS_14ServiceContextENS_22StorageEngineInitFlagsE"},{"b":"55FC24D61000","o":"988FF6"},{"b":"55FC24D61000","o":"A6F013","s":"_ZN5mongo11mongoDbMainEiPPcS1_"},{"b":"55FC24D61000","o":"A06039","s":"main"},{"b":"7F7411320000","o":"202E1","s":"__libc_start_main"},{"b":"55FC24D61000","o":"A6DACA","s":"_start"}],"processInfo":{ "mongodbVersion" : "4.0.2", "gitVersion" : "fc1573ba18aee42f97a3bb13b67af7d837826b47", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "4.9.93-linuxkit-aufs", "version" : "#1 SMP Wed Jun 6 16:55:56 UTC 2018", "machine" : "x86_64" }, "somap" : [ { "b" : "55FC24D61000", "elfType" : 3, "buildId" : "84470DCAFB1ADD028F30F0D7071C5C45B50DDFD5" }, { "b" : "7FFD17FF3000", "path" : "linux-vdso.so.1", "elfType" : 3, "buildId" : "1EA93CF31322C04C34ACD821F4C4AD99A3056D39" }, { "b" : "7F7412B19000", "path" : "/usr/lib/x86_64-linux-gnu/libcurl.so.4", "elfType" : 3, "buildId" : "2C234671CF2A5791535AEC4F598953ED58DA2BD2" }, { "b" : "7F7412902000", "path" : "/lib/x86_64-linux-gnu/libresolv.so.2", "elfType" : 3, "buildId" : "713D47D5F599289C0A91ADE8F0122B2B4AA78B2E" }, { "b" : "7F741246F000", "path" : "/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1", "elfType" : 3, "buildId" : "2CFE882A331D7857E9CE1B5DE3255E6DA76EF899" }, { "b" : "7F7412203000", "path" : "/usr/lib/x86_64-linux-gnu/libssl.so.1.1", "elfType" : 3, "buildId" : "E2AA3B39763D943F56B3BD05C8E36E639BA95E12" }, { "b" : "7F7411FFF000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3, "buildId" : "B895F0831F623C5F23603401D4069F9F94C24761" }, { "b" : "7F7411DF7000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3, "buildId" : "5D83E0642E645026DBB11F89F7DF7106BD821495" }, { "b" : "7F7411AF3000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3, "buildId" : "1B95E3A8B8788B07E4F59EE69B1877F9DEB42033" }, { "b" : "7F74118DC000", "path" : "/lib/x86_64-linux-gnu/libgcc_s.so.1", "elfType" : 3, "buildId" : "51AD5FD294CD6C813BED40717347A53434B80B7A" }, { "b" : "7F74116BF000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3, "buildId" : "4285CD3158DDE596765C747AE210AB6CBD258B22" }, { "b" : "7F7411320000", "path" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3, "buildId" : "AA889E26A70F98FA8D230D088F7CC5BF43573163" }, { "b" : 
"7F7412D99000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "263F909DBE11A66F7C6233E3FF0521148D9F8370" }, { "b" : "7F74110FA000", "path" : "/usr/lib/x86_64-linux-gnu/libnghttp2.so.14", "elfType" : 3, "buildId" : "57FE530E3C6E81FD243F02556CDC09142D176A2E" }, { "b" : "7F7410ED8000", "path" : "/usr/lib/x86_64-linux-gnu/libidn2.so.0", "elfType" : 3, "buildId" : "52F90A61AFD6B0605DAC537C5D1B8713E8E93889" }, { "b" : "7F7410CBB000", "path" : "/usr/lib/x86_64-linux-gnu/librtmp.so.1", "elfType" : 3, "buildId" : "82864DDD2632F14010AD7740D09B7270901D418D" }, { "b" : "7F7410A8F000", "path" : "/usr/lib/x86_64-linux-gnu/libssh2.so.1", "elfType" : 3, "buildId" : "2250CA06323F7E7479640C9F53972E22ABFD1628" }, { "b" : "7F7410881000", "path" : "/usr/lib/x86_64-linux-gnu/libpsl.so.5", "elfType" : 3, "buildId" : "1667EE4ED5224694326899E760722B7B366CEB41" }, { "b" : "7F7410618000", "path" : "/usr/lib/x86_64-linux-gnu/libssl.so.1.0.2", "elfType" : 3, "buildId" : "8D36AB4C9418B58CC011D77F38B7FA8B918E2CF8" }, { "b" : "7F74101B4000", "path" : "/usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.2", "elfType" : 3, "buildId" : "B26F780956D55C181206CBB67FEEFD665209C227" }, { "b" : "7F740FF69000", "path" : "/usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "4986F4E8DB61C236489DDC53213B04DB65A2EAA0" }, { "b" : "7F740FC8F000", "path" : "/usr/lib/x86_64-linux-gnu/libkrb5.so.3", "elfType" : 3, "buildId" : "811575446A67638D151C4829E7040205D92F9C9B" }, { "b" : "7F740FA5C000", "path" : "/usr/lib/x86_64-linux-gnu/libk5crypto.so.3", "elfType" : 3, "buildId" : "19CE7A9BC33E0910065BDFE299DCACFF638BF06E" }, { "b" : "7F740F858000", "path" : "/lib/x86_64-linux-gnu/libcom_err.so.2", "elfType" : 3, "buildId" : "2EB9256EE03E4D411C25715BB6EC484BF9B09E66" }, { "b" : "7F740F649000", "path" : "/usr/lib/x86_64-linux-gnu/liblber-2.4.so.2", "elfType" : 3, "buildId" : "EDE2EA44C0B018BBDB20D71A1C8AC99F0CC3F99F" }, { "b" : "7F740F3F8000", "path" : "/usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2", "elfType" : 3, "buildId" : "EB45F0CC6A96D38B78D97C87D5D4A3E0706B2079" }, { "b" : "7F740F1DE000", "path" : "/lib/x86_64-linux-gnu/libz.so.1", "elfType" : 3, "buildId" : "908B5A955D0A73FB8D31E0F927D0CDBA810CB300" }, { "b" : "7F740EEC7000", "path" : "/usr/lib/x86_64-linux-gnu/libunistring.so.0", "elfType" : 3, "buildId" : "2E457FF72C4E6A267C0B10E06C3FB8C4F32487EE" }, { "b" : "7F740EB2E000", "path" : "/usr/lib/x86_64-linux-gnu/libgnutls.so.30", "elfType" : 3, "buildId" : "07A8F58A7E4E32A36FEEE7511F728D5896439B13" }, { "b" : "7F740E8F9000", "path" : "/usr/lib/x86_64-linux-gnu/libhogweed.so.4", "elfType" : 3, "buildId" : "1D3666D2FA45541887E96DED01529116996812AD" }, { "b" : "7F740E6C2000", "path" : "/usr/lib/x86_64-linux-gnu/libnettle.so.6", "elfType" : 3, "buildId" : "43D18C6AB6EDE083BE2C5FAA857E379389819ACB" }, { "b" : "7F740E43F000", "path" : "/usr/lib/x86_64-linux-gnu/libgmp.so.10", "elfType" : 3, "buildId" : "45ACF9508A033A2AE2672156491BC524A3BF20CD" }, { "b" : "7F740E12F000", "path" : "/lib/x86_64-linux-gnu/libgcrypt.so.20", "elfType" : 3, "buildId" : "917AB7D78C8C49FE3095ABFF95FAB28575D704BB" }, { "b" : "7F740DF23000", "path" : "/usr/lib/x86_64-linux-gnu/libkrb5support.so.0", "elfType" : 3, "buildId" : "932297A42269A54BCDB88198BA06BD63B13E1996" }, { "b" : "7F740DD1F000", "path" : "/lib/x86_64-linux-gnu/libkeyutils.so.1", "elfType" : 3, "buildId" : "3CFF3CE519A16305A617D8885EA5D3AE3D965461" }, { "b" : "7F740DB04000", "path" : "/usr/lib/x86_64-linux-gnu/libsasl2.so.2", "elfType" : 3, "buildId" : 
"A54D193AB95897B4BFE387E6578064711115AB75" }, { "b" : "7F740D89F000", "path" : "/usr/lib/x86_64-linux-gnu/libp11-kit.so.0", "elfType" : 3, "buildId" : "86F00B032B270ED5297EB393B30EDEF76B890573" }, { "b" : "7F740D66B000", "path" : "/lib/x86_64-linux-gnu/libidn.so.11", "elfType" : 3, "buildId" : "CCC0C44563E10F70FCF98D0C7AFABC9801F7159B" }, { "b" : "7F740D458000", "path" : "/usr/lib/x86_64-linux-gnu/libtasn1.so.6", "elfType" : 3, "buildId" : "D03612373D33091A4678A032C5D7341FB56FE7DC" }, { "b" : "7F740D244000", "path" : "/lib/x86_64-linux-gnu/libgpg-error.so.0", "elfType" : 3, "buildId" : "8B9D1F17D242A08FEA23AF32055037569A714209" }, { "b" : "7F740D03B000", "path" : "/usr/lib/x86_64-linux-gnu/libffi.so.6", "elfType" : 3, "buildId" : "AA1401F42D517693444B96C5774A62D4E8C84A35" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x55fc27119b81]
 mongod(+0x23B7D99) [0x55fc27118d99]
 mongod(+0x23B827D) [0x55fc2711927d]
 libpthread.so.0(+0x110C0) [0x7f74116d00c0]
 libc.so.6(gsignal+0xCF) [0x7f7411352fff]
 libc.so.6(abort+0x16A) [0x7f741135442a]
 mongod(_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj+0x0) [0x55fc257654a3]
 mongod(+0xAF6F56) [0x55fc25857f56]
 mongod(+0xB67CD9) [0x55fc258c8cd9]
 mongod(__wt_err_func+0x90) [0x55fc256f1424]
 mongod(__wt_panic+0x3F) [0x55fc256f1844]
 mongod(+0xB323B3) [0x55fc258933b3]
 mongod(__wt_log_force_sync+0x111) [0x55fc25956661]
 mongod(__wt_txn_checkpoint_log+0x3D1) [0x55fc258de2e1]
 mongod(+0xB7B01B) [0x55fc258dc01b]
 mongod(__wt_txn_checkpoint+0x1C3) [0x55fc258dc973]
 mongod(+0xB6118A) [0x55fc258c218a]
 mongod(__wt_txn_recover+0x7C4) [0x55fc258e1b94]
 mongod(__wt_connection_workers+0x37) [0x55fc2586d027]
 mongod(wiredtiger_open+0x1A64) [0x55fc25869c94]
 mongod(_ZN5mongo18WiredTigerKVEngineC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8_PNS_11ClockSourceES8_mbbbb+0x675) [0x55fc25835b65]
 mongod(+0xAB8089) [0x55fc25819089]
 mongod(_ZN5mongo23initializeStorageEngineEPNS_14ServiceContextENS_22StorageEngineInitFlagsE+0xBF5) [0x55fc25e63185]
 mongod(+0x988FF6) [0x55fc256e9ff6]
 mongod(_ZN5mongo11mongoDbMainEiPPcS1_+0x873) [0x55fc257d0013]
 mongod(main+0x9) [0x55fc25767039]
 libc.so.6(__libc_start_main+0xF1) [0x7f74113402e1]
 mongod(_start+0x2A) [0x55fc257ceaca]
-----  END BACKTRACE  -----

@dwitzig
Author

dwitzig commented Sep 27, 2018

I used helm install stable/mongodb --set securityContext.fsGroup=0,securityContext.runAsUser=0

@tompizmor
Member

I found a similar issue with another MongoDB image, and it looks like a MongoDB limitation.
From the MongoDB documentation:

IMPORTANT

MongoDB requires a filesystem that supports fsync() on directories. For example, HGFS and VirtualBox's shared folders do not support this operation.

Check this issue for more info: mvertes/docker-alpine-mongo#1
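
If you want to sanity-check a particular mount for that limitation, here is a rough sketch (it assumes python3 is available wherever the volume is mounted; the path is a placeholder for the directory to test):

# Open the directory and fsync its file descriptor. On filesystems that do not
# support fsync() on directories this raises OSError ("Invalid argument"),
# which matches the fdatasync failure in the logs above.
python3 -c "import os; fd = os.open('/path/to/mounted/dir', os.O_RDONLY); os.fsync(fd); os.close(fd); print('directory fsync OK')"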

@lagorsse

lagorsse commented Nov 19, 2018

Same issue here. I run MongoDB in Docker on Windows without issue, so there is definitely a problem here.
I tried helm install stable/mongodb --set securityContext.fsGroup=0,securityContext.runAsUser=0 and got the same result.

@stale

stale bot commented Dec 4, 2018

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

The stale bot added the 'stale' label on Dec 4, 2018.
@carrodher added the 'on-hold' label and removed the 'stale' label on Dec 5, 2018.
@tompizmor
Member

@lagorsse, did you try disabling persistence completely?

helm install stable/mongodb --set persistence.enabled=false

@Bessonov

Bessonov commented Feb 6, 2019

@tompizmor I have the same issue with rook/ceph. There's no problem with persistence disabled, or when running mongodb 3.4 as root.

@tompizmor
Member

Hi @Bessonov,

It seems Rook mounts the volumes owned by root:root with drwxr-x--- permissions, so an unprivileged container can't write to the volume. An issue for this already exists in the Rook repository (rook/rook#1921).

@TheHalcyonSavant

I'm also getting:
Error executing 'postInstallation': EACCES: permission denied, mkdir '/bitnami/mongodb/data'
on docker-for-desktop (Windows 10) with the default installation.

@tompizmor
Member

@TheHalcyonSavant I am not sure if it has already been mentioned in this issue, but the problem happens because Docker for Windows doesn't support mounting volumes with specific ownership or permissions. Hence, as the mongodb image is non-root, it cannot write to the mounted volume.

Another user had a similar issue with the Postgres chart and wrote this comment that may help understand the issue.

@carrodher removed the 'on-hold' label on May 31, 2019.
@Learath

Learath commented Oct 12, 2019

--set persistence.enabled=false

This same issue and fix apply to https://github.com/bitnami/charts/tree/master/bitnami/etcd as well.

@tompizmor
Member

Correct! This issue affects every Helm chart whose main image runs as a non-root user and writes data to a volume.

@stale

stale bot commented Oct 29, 2019

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

The stale bot added the 'stale' label on Oct 29, 2019.
@stale

stale bot commented Nov 3, 2019

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.

@TBBle

TBBle commented Mar 25, 2020

I suspect this particular problem was solved by docker/for-win#5665 in Docker Desktop for Windows 2.2.0.4, which keeps the host mounts in the Linux VM instead of passing them through to Windows.

@carrodher
Member

Thanks for letting us know!

@eevleevs

I have a similar issue on Azure. I checked that mongo runs with user id 1238, but neither

securityContext.fsGroup=0,securityContext.runAsUser=0

nor

securityContext.fsGroup=1238,securityContext.runAsUser=1238

helped. The mount is still not writable. Any other suggestions?

@juan131
Contributor

juan131 commented Oct 27, 2020

Hi @eevleevs

Are you able to reproduce the issue with persistence disabled? It's very likely that the issue is related to the type of PersistentVolume(s) you are mounting on MongoDB. Are you using any special StorageClass?

@eevleevs

With persistence disabled, it works. The issue is that the mount is not writable.

I tried using the default and cet-store storage classes; no difference.

Also, if I run a bitnami/mongodb container directly (not with the chart) and change the user to 1238, I can write to persistent storage.
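
For reference, the direct test mentioned above looks roughly like this (a sketch; the host path, uid and password are placeholders):

# Run the image outside Kubernetes with an explicit uid and a bind-mounted data directory.
docker run --rm --user 1238 \
  -e MONGODB_ROOT_PASSWORD=example-password \
  -v /path/on/host/mongodb-data:/bitnami/mongodb \
  docker.io/bitnami/mongodb:4.4.1-debian-10-r39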

@juan131
Contributor

juan131 commented Oct 28, 2020

Hi @eevleevs

Please take a look at the Troubleshooting guide we recently created (https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/) and give the solution suggested in the guide a try.
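
For reference, the option discussed in the guide and in the follow-up comments boils down to enabling the chart's volume-permissions init container (a minimal sketch; the release name is a placeholder):

# Run an init container that chowns the volume before MongoDB starts.
helm install my-release bitnami/mongodb --set volumePermissions.enabled=true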

@eevleevs

If I use only volumePermissions.enabled=true, the volume-permissions container fails with:

chown: changing ownership of '/bitnami/mongodb': Operation not permitted

If I use

volumePermissions.enabled=true
volumePermissions.securityContext.runAsUser=0
volumePermissions.securityContext.fsGroup=0

then volume-permissions works, but the mongodb container fails with:

mongodb 10:08:58.23 
28/10/2020 11:08:58 mongodb 10:08:58.23 Welcome to the Bitnami mongodb container
28/10/2020 11:08:58 mongodb 10:08:58.23 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
28/10/2020 11:08:58 mongodb 10:08:58.24 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
28/10/2020 11:08:58 mongodb 10:08:58.24 
28/10/2020 11:08:58 mongodb 10:08:58.24 INFO  ==> ** Starting MongoDB setup **
28/10/2020 11:08:58 mongodb 10:08:58.25 INFO  ==> Validating settings in MONGODB_* env vars...
28/10/2020 11:08:58 mongodb 10:08:58.45 INFO  ==> Initializing MongoDB...
28/10/2020 11:08:58 mongodb 10:08:58.62 INFO  ==> Enabling authentication...
28/10/2020 11:08:58 mongodb 10:08:58.63 INFO  ==> Deploying MongoDB with persisted data...
28/10/2020 11:08:58 mongodb 10:08:58.64 INFO  ==> ** MongoDB setup finished! **
28/10/2020 11:08:58 
28/10/2020 11:08:58 mongodb 10:08:58.65 INFO  ==> ** Starting MongoDB **
28/10/2020 11:08:58 
28/10/2020 11:08:58 {"t":{"$date":"2020-10-28T10:08:58.678+00:00"},"s":"W",  "c":"CONTROL",  "id":20698,   "ctx":"main","msg":"***** SERVER RESTARTED *****","tags":["startupWarnings"]}
28/10/2020 11:08:58 {"t":{"$date":"2020-10-28T10:08:58.680+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
28/10/2020 11:08:58 {"t":{"$date":"2020-10-28T10:08:58.682+00:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
28/10/2020 11:08:58 {"t":{"$date":"2020-10-28T10:08:58.682+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
28/10/2020 11:08:58 {"t":{"$date":"2020-10-28T10:08:58.683+00:00"},"s":"I",  "c":"STORAGE",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/bitnami/mongodb/data/db","architecture":"64-bit","host":"mongodb-plfll-797cfb7cbc-vfqc8"}}
28/10/2020 11:08:58 {"t":{"$date":"2020-10-28T10:08:58.683+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.1","gitVersion":"ad91a93a5a31e175f5cbf8c69561e788bbc55ce1","openSSLVersion":"OpenSSL 1.1.1d  10 Sep 2019","modules":[],"allocator":"tcmalloc","environment":{"distmod":"debian10","distarch":"x86_64","target_arch":"x86_64"}}}}
28/10/2020 11:08:58 {"t":{"$date":"2020-10-28T10:08:58.683+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"PRETTY_NAME=\"Debian GNU/Linux 10 (buster)\"","version":"Kernel 4.15.0-1098-azure"}}}
28/10/2020 11:08:58 {"t":{"$date":"2020-10-28T10:08:58.683+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"config":"/opt/bitnami/mongodb/conf/mongodb.conf","net":{"bindIp":"*","ipv6":false,"port":27017,"unixDomainSocket":{"enabled":true,"pathPrefix":"/opt/bitnami/mongodb/tmp"}},"processManagement":{"fork":false,"pidFilePath":"/opt/bitnami/mongodb/tmp/mongodb.pid"},"security":{"authorization":"enabled"},"setParameter":{"enableLocalhostAuthBypass":"false"},"storage":{"dbPath":"/bitnami/mongodb/data/db","directoryPerDB":false,"journal":{"enabled":true}},"systemLog":{"destination":"file","logAppend":true,"logRotate":"reopen","path":"/opt/bitnami/mongodb/logs/mongodb.log","quiet":false,"verbosity":0}}}}
28/10/2020 11:08:58 {"t":{"$date":"2020-10-28T10:08:58.888+00:00"},"s":"I",  "c":"STORAGE",  "id":22315,   "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=15572M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}}
28/10/2020 11:09:00 {"t":{"$date":"2020-10-28T10:09:00.098+00:00"},"s":"E",  "c":"STORAGE",  "id":22435,   "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":17,"message":"[1603879740:98629][1:0x7f47a3043100], connection: __posix_open_file, 806: /bitnami/mongodb/data/db/WiredTiger.wt: handle-open: open: File exists"}}
28/10/2020 11:09:01 {"t":{"$date":"2020-10-28T10:09:01.564+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"unexpected file WiredTiger.wt found, renamed to WiredTiger.wt.15"}}
28/10/2020 11:09:01 {"t":{"$date":"2020-10-28T10:09:01.663+00:00"},"s":"E",  "c":"STORAGE",  "id":22435,   "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":1,"message":"[1603879741:663129][1:0x7f47a3043100], connection: __posix_open_file, 806: /bitnami/mongodb/data/db/WiredTiger.wt: handle-open: open: Operation not permitted"}}
28/10/2020 11:09:03 {"t":{"$date":"2020-10-28T10:09:03.981+00:00"},"s":"E",  "c":"STORAGE",  "id":22435,   "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":17,"message":"[1603879743:981920][1:0x7f47a3043100], connection: __posix_open_file, 806: /bitnami/mongodb/data/db/WiredTiger.wt: handle-open: open: File exists"}}
28/10/2020 11:09:07 {"t":{"$date":"2020-10-28T10:09:07.649+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"unexpected file WiredTiger.wt found, renamed to WiredTiger.wt.16"}}
28/10/2020 11:09:08 {"t":{"$date":"2020-10-28T10:09:08.047+00:00"},"s":"E",  "c":"STORAGE",  "id":22435,   "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":1,"message":"[1603879748:47706][1:0x7f47a3043100], connection: __posix_open_file, 806: /bitnami/mongodb/data/db/WiredTiger.wt: handle-open: open: Operation not permitted"}}
28/10/2020 11:09:09 {"t":{"$date":"2020-10-28T10:09:09.542+00:00"},"s":"E",  "c":"STORAGE",  "id":22435,   "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":17,"message":"[1603879749:542521][1:0x7f47a3043100], connection: __posix_open_file, 806: /bitnami/mongodb/data/db/WiredTiger.wt: handle-open: open: File exists"}}
28/10/2020 11:09:12 {"t":{"$date":"2020-10-28T10:09:12.049+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"unexpected file WiredTiger.wt found, renamed to WiredTiger.wt.17"}}
28/10/2020 11:09:12 {"t":{"$date":"2020-10-28T10:09:12.293+00:00"},"s":"E",  "c":"STORAGE",  "id":22435,   "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":1,"message":"[1603879752:293259][1:0x7f47a3043100], connection: __posix_open_file, 806: /bitnami/mongodb/data/db/WiredTiger.wt: handle-open: open: Operation not permitted"}}
28/10/2020 11:09:12 {"t":{"$date":"2020-10-28T10:09:12.421+00:00"},"s":"W",  "c":"STORAGE",  "id":22347,   "ctx":"initandlisten","msg":"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade."}
28/10/2020 11:09:12 {"t":{"$date":"2020-10-28T10:09:12.421+00:00"},"s":"F",  "c":"STORAGE",  "id":28595,   "ctx":"initandlisten","msg":"Terminating.","attr":{"reason":"1: Operation not permitted"}}
28/10/2020 11:09:12 {"t":{"$date":"2020-10-28T10:09:12.421+00:00"},"s":"F",  "c":"-",        "id":23091,   "ctx":"initandlisten","msg":"Fatal assertion","attr":{"msgid":28595,"file":"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp","line":1101}}
28/10/2020 11:09:12 {"t":{"$date":"2020-10-28T10:09:12.421+00:00"},"s":"F",  "c":"-",        "id":23092,   "ctx":"initandlisten","msg":"\n\n***aborting after fassert() failure\n\n"}

@juan131
Contributor

juan131 commented Oct 29, 2020

Hi @eevleevs

You obtained the error below:

Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade.

It looks like you are installing the chart but you didn't remove existing PVCs from previous deployments (see https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#persistence-volumes-pvs-retained-from-previous-releases). Could you please uninstall your release, remove the associated PVCs, and install it again setting volumePermissions.enabled=true?
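
As a sketch, the suggested sequence (release and PVC names are placeholders):

helm uninstall my-release
kubectl get pvc                           # identify any PVC left over from the previous release
kubectl delete pvc my-release-mongodb     # delete the leftover PVC(s)
helm install my-release bitnami/mongodb --set volumePermissions.enabled=true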

@eevleevs

That is not the case; I am always using a different release name. And anyway, the PVCs are actually removed automatically here.
Any other ideas?

@juan131
Contributor

juan131 commented Oct 30, 2020

Hi @eevleevs

But your logs say:

28/10/2020 11:08:58 mongodb 10:08:58.63 INFO  ==> Deploying MongoDB with persisted data...

So it's possible that you're not sharing the logs from the first container boot, i.e. the first time MongoDB is created, before it is restarted and enters the CrashLoopBackOff state. Could you please share those logs instead? Please also set image.debug=true to increase the verbosity of the container logs.
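
One way to capture the first boot's output before the container is restarted (a sketch; the release and pod names are placeholders):

helm install my-release bitnami/mongodb \
  --set volumePermissions.enabled=true \
  --set image.debug=true
kubectl logs -f deploy/my-release-mongodb     # follow the current container from the start
kubectl logs <mongodb-pod> --previous         # logs of the last terminated container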

@eevleevs

Sorry, cannot see any difference.

@juan131
Contributor

juan131 commented Nov 3, 2020

Sorry, cannot see any difference.

@eevleevs what do you mean?

You should see this in the logs:

mongodb 08:37:53.60 INFO  ==> ** Starting MongoDB setup **
mongodb 08:37:53.61 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 08:37:53.64 INFO  ==> Initializing MongoDB...
mongodb 08:37:53.65 INFO  ==> Enabling authentication...
mongodb 08:37:53.66 INFO  ==> Deploying MongoDB from scratch...
...

Instead of:

mongodb 10:08:58.24 INFO  ==> ** Starting MongoDB setup **
mongodb 10:08:58.25 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 10:08:58.45 INFO  ==> Initializing MongoDB...
mongodb 10:08:58.62 INFO  ==> Enabling authentication...
mongodb 10:08:58.63 INFO  ==> Deploying MongoDB with persisted data...
...

@eevleevs

eevleevs commented Nov 9, 2020

I am always seeing Deploying MongoDB with persisted data, even inspecting the previous container's logs right after the first failure. Maybe I am not fast enough? Any way to increase the backoff?

@FraPazGal
Contributor

Hi @eevleevs,

AFAIK, logs cannot be retrieved further back than the container immediately previous to the running one. Could you try kubectl get events <podname>?

I can't seem to reproduce the issue, have you changed any other parameter besides volumePermissions.enabled=true?
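
For reference, events can also be narrowed to a single pod with a field selector (a sketch; the pod name is a placeholder):

kubectl get events --field-selector involvedObject.name=<podname>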

@eevleevs

It doesn't accept <podname>; here is what it looks like without it:

> kubectl.exe get events
LAST SEEN   TYPE      REASON                  OBJECT                          MESSAGE
6m13s       Warning   FailedScheduling        pod/mongodb-7fdd648698-76jkh    pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
6m10s       Normal    Scheduled               pod/mongodb-7fdd648698-76jkh    Successfully assigned mongodb/mongodb-7fdd648698-76jkh to westeuropek8swork1
6m7s        Normal    Pulling                 pod/mongodb-7fdd648698-76jkh    Pulling image "docker.io/bitnami/minideb:buster"
6m6s        Normal    Pulled                  pod/mongodb-7fdd648698-76jkh    Successfully pulled image "docker.io/bitnami/minideb:buster"
6m5s        Normal    Created                 pod/mongodb-7fdd648698-76jkh    Created container volume-permissions
6m5s        Normal    Started                 pod/mongodb-7fdd648698-76jkh    Started container volume-permissions
5m7s        Normal    Pulled                  pod/mongodb-7fdd648698-76jkh    Container image "docker.io/bitnami/mongodb:4.4.1-debian-10-r39" already present on machine
5m6s        Normal    Created                 pod/mongodb-7fdd648698-76jkh    Created container mongodb
5m6s        Normal    Started                 pod/mongodb-7fdd648698-76jkh    Started container mongodb
66s         Warning   BackOff                 pod/mongodb-7fdd648698-76jkh    Back-off restarting failed container
6m27s       Normal    SuccessfulCreate        replicaset/mongodb-7fdd648698   Created pod: mongodb-7fdd648698-76jkh
6m27s       Normal    ScalingReplicaSet       deployment/mongodb              Scaled up replica set mongodb-7fdd648698 to 1
6m13s       Normal    ProvisioningSucceeded   persistentvolumeclaim/mongodb   Successfully provisioned volume pvc-c79d0502-22f5-11eb-8edd-000d3a29227a using kubernetes.io/azure-file

I did not change anything except:

volumePermissions.enabled=true
image.debug=true

@FraPazGal
Contributor

Hi @eevleevs,

Could you paste the output of kubectl describe pod <podname> here, please? I'm not seeing the SuccessfulAttachVolume event, so there may be a problem with the PVC. Also, could you please try to obtain the logs of the pod before its first restart?

@HassanHashemi

In my case the replica set key was not long enough.
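
For context: mongod requires a keyfile of 6 to 1024 base64 characters, so a replica set key that is too short will keep the members from starting. A sketch for generating an acceptable key (the architecture and auth.replicaSetKey parameter names are assumed from recent versions of the chart):

# Generate a ~44-character base64 key and pass it to the chart.
helm install my-release bitnami/mongodb \
  --set architecture=replicaset \
  --set auth.replicaSetKey=$(openssl rand -base64 32)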

@juan131
Contributor

juan131 commented Mar 15, 2021

Thanks for sharing your insights @HassanHashemi
I guess we could include a warning in the chart that prevents users from installing it when the key is not long enough.
