+ date Thu Jul 19 20:10:32 UTC 2018 + kubectl logs ceph-server -n e2e-tests-csi-mock-plugin-9sbdt monmaptool: monmap file /etc/ceph/monmap monmaptool: set fsid to 3cc57754-8c43-4f67-b7bb-ee0a564141d3 monmaptool: writing epoch 0 to /etc/ceph/monmap (1 monitors) ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a 2018-07-19 20:08:22.459501 7fa84fdc1780 -1 WARNING: 'mon addr' config option 172.17.0.3:0/0 does not match monmap file continuing with monmap configuration got crush map from osdmap epoch 1 set crush map 0 2018-07-19 20:08:26.926089 7fbd529f8940 -1 FileJournal::_open_any: aio not supported without directio; disabling aio 2018-07-19 20:08:27.324433 7fbd529f8940 -1 FileJournal::_open_any: aio not supported without directio; disabling aio 2018-07-19 20:08:27.325542 7fbd529f8940 -1 filestore(/var/lib/ceph/osd/ceph-0) could not find #-1:7b3f43c4:::osd_superblock:0# in index: (2) No such file or directory 2018-07-19 20:08:28.019405 7fbd529f8940 -1 created object store /var/lib/ceph/osd/ceph-0 for osd.0 fsid 3cc57754-8c43-4f67-b7bb-ee0a564141d3 2018-07-19 20:08:28.019502 7fbd529f8940 -1 auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory 2018-07-19 20:08:28.020161 7fbd529f8940 -1 created new key in keyring /var/lib/ceph/osd/ceph-0/keyring added key for osd.0 add item id 0 name 'osd.0' weight 1 at location {host=cephbox,root=default} to crush map starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal 1 2018-07-19 20:08:32.395962 7efdb17f9940 -1 FileJournal::_open_any: aio not supported without directio; disabling aio 2018-07-19 20:08:32.856920 7efdb17f9940 -1 FileJournal::_open_any: aio not supported without directio; disabling aio 2018-07-19 20:08:32.858500 7efdb17f9940 -1 filestore(/var/lib/ceph/osd/ceph-1) could not find #-1:7b3f43c4:::osd_superblock:0# in index: (2) No such file or directory 2018-07-19 20:08:34.054852 7efdb17f9940 -1 created object store /var/lib/ceph/osd/ceph-1 for osd.1 fsid 3cc57754-8c43-4f67-b7bb-ee0a564141d3 2018-07-19 20:08:34.054946 7efdb17f9940 -1 auth: error reading file: /var/lib/ceph/osd/ceph-1/keyring: can't open /var/lib/ceph/osd/ceph-1/keyring: (2) No such file or directory 2018-07-19 20:08:34.055228 7efdb17f9940 -1 created new key in keyring /var/lib/ceph/osd/ceph-1/keyring added key for osd.1 add item id 1 name 'osd.1' weight 1 at location {host=cephbox,root=default} to crush map starting osd.1 at :/0 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal starting mds.cephfs at :/0 Importing image: 3% complete... Importing image: 6% complete... Importing image: 9% complete... Importing image: 13% complete... Importing image: 16% complete... Importing image: 19% complete... Importing image: 23% complete... Importing image: 26% complete... Importing image: 29% complete... Importing image: 33% complete... Importing image: 36% complete... Importing image: 39% complete... Importing image: 42% complete... Importing image: 46% complete... Importing image: 49% complete... Importing image: 52% complete... Importing image: 56% complete... Importing image: 59% complete... Importing image: 62% complete... Importing image: 66% complete... Importing image: 69% complete... Importing image: 72% complete... Importing image: 76% complete... Importing image: 79% complete... Importing image: 82% complete... Importing image: 85% complete... Importing image: 89% complete... Importing image: 92% complete... Importing image: 95% complete... 
Importing image: 99% complete... Importing image: 100% complete...done. pool 'cephfs_data' created pool 'cephfs_metadata' created new fs with metadata pool 2 and data pool 1 ceph-fuse[580]: starting ceph client 2018-07-19 20:09:29.559966 7fde94bfdf80 -1 init, newargv = 0x563ec7c49020 newargc=11 ceph-fuse[580]: probably no MDS server is up? ceph-fuse[580]: ceph mount failed with (4) Interrupted system call ceph-fuse[578]: mount failed: (4) Interrupted system call Waiting for cephfs to be up ceph-fuse[603]: starting ceph client 2018-07-19 20:09:30.663756 7faef1bf2f80 -1 init, newargv = 0x557236137020 newargc=11 ceph-fuse[603]: probably no MDS server is up? ceph-fuse[603]: ceph mount failed with (4) Interrupted system call ceph-fuse[601]: mount failed: (4) Interrupted system call Waiting for cephfs to be up ceph-fuse[626]: starting ceph client 2018-07-19 20:09:31.760647 7fd02a522f80 -1 init, newargv = 0x564d079dd020 newargc=11 ceph-fuse[626]: starting fuse Ceph is ready + kubectl logs csi-cephfs-controller-0 -n e2e-tests-csi-mock-plugin-9sbdt -c csi-external-provisioner I0719 20:08:26.027690 1 csi-provisioner.go:70] Building kube configs for running in cluster... I0719 20:08:26.132381 1 controller.go:97] Connecting to /var/lib/kubelet/plugins/csi-cephfsplugin/csi.sock I0719 20:08:26.133000 1 controller.go:124] Still trying, connection is CONNECTING I0719 20:08:26.135131 1 controller.go:121] Connected I0719 20:08:26.140209 1 controller.go:492] Starting provisioner controller 7ef4c7b9-8b8f-11e8-aaf5-0242ac110005! I0719 20:08:26.141048 1 reflector.go:202] Starting reflector *v1.PersistentVolume (15s) from github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:497 I0719 20:08:26.141139 1 reflector.go:240] Listing and watching *v1.PersistentVolume from github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:497 I0719 20:08:26.141082 1 reflector.go:202] Starting reflector *v1.PersistentVolumeClaim (15s) from github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:496 I0719 20:08:26.141718 1 reflector.go:202] Starting reflector *v1.StorageClass (15s) from github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:498 I0719 20:08:26.141825 1 reflector.go:240] Listing and watching *v1.PersistentVolumeClaim from github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:496 I0719 20:08:26.141845 1 reflector.go:240] Listing and watching *v1.StorageClass from github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:498 I0719 20:08:41.192031 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:498: forcing resync I0719 20:08:41.193018 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:496: forcing resync I0719 20:08:41.195282 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:497: forcing resync I0719 20:08:56.192567 1 reflector.go:286] 
github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:498: forcing resync I0719 20:08:56.193429 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:496: forcing resync I0719 20:08:56.195736 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:497: forcing resync I0719 20:09:11.193156 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:498: forcing resync I0719 20:09:11.193744 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:496: forcing resync I0719 20:09:11.196198 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:497: forcing resync I0719 20:09:26.193684 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:498: forcing resync I0719 20:09:26.194028 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:496: forcing resync I0719 20:09:26.196560 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:497: forcing resync I0719 20:09:33.079462 1 controller.go:1167] scheduleOperation[lock-provision-e2e-tests-csi-mock-plugin-9sbdt/pvc-tw42n[a6d54c82-8b8f-11e8-9d15-525400b854f0]] I0719 20:09:33.248041 1 leaderelection.go:156] attempting to acquire leader lease... 
I0719 20:09:33.270221 1 leaderelection.go:178] successfully acquired lease to provision for pvc e2e-tests-csi-mock-plugin-9sbdt/pvc-tw42n I0719 20:09:33.270216 1 controller.go:988] claim update received: MODIFIED e2e-tests-csi-mock-plugin-9sbdt/pvc-tw42n Pending I0719 20:09:33.270727 1 controller.go:1167] scheduleOperation[provision-e2e-tests-csi-mock-plugin-9sbdt/pvc-tw42n[a6d54c82-8b8f-11e8-9d15-525400b854f0]] I0719 20:09:33.271022 1 controller.go:836] provisionClaimOperation [e2e-tests-csi-mock-plugin-9sbdt/pvc-tw42n] started, class: "e2e-tests-csi-mock-plugin-9sbdt-sc" I0719 20:09:33.275747 1 controller.go:88] GRPC call: /csi.v0.Identity/GetPluginCapabilities I0719 20:09:33.275910 1 controller.go:89] GRPC request: I0719 20:09:33.279108 1 controller.go:91] GRPC response: capabilities: > I0719 20:09:33.279208 1 controller.go:92] GRPC error: I0719 20:09:33.279233 1 controller.go:88] GRPC call: /csi.v0.Controller/ControllerGetCapabilities I0719 20:09:33.279245 1 controller.go:89] GRPC request: I0719 20:09:33.280348 1 controller.go:91] GRPC response: capabilities: > I0719 20:09:33.280469 1 controller.go:92] GRPC error: I0719 20:09:33.280504 1 controller.go:88] GRPC call: /csi.v0.Identity/GetPluginInfo I0719 20:09:33.280537 1 controller.go:89] GRPC request: I0719 20:09:33.281693 1 controller.go:91] GRPC response: name:"csi-cephfsplugin" vendor_version:"0.2.0" I0719 20:09:33.281830 1 controller.go:92] GRPC error: I0719 20:09:33.300926 1 controller.go:88] GRPC call: /csi.v0.Controller/CreateVolume I0719 20:09:33.301001 1 controller.go:89] GRPC request: name:"pvc-a6d54c828b8f11e8" capacity_range: volume_capabilities: access_mode: > parameters: parameters: parameters: parameters: parameters: controller_create_secrets: controller_create_secrets: I0719 20:09:35.402357 1 controller.go:988] claim update received: MODIFIED e2e-tests-csi-mock-plugin-9sbdt/pvc-tw42n Pending I0719 20:09:35.403132 1 leaderelection.go:215] succesfully renewed lease to provision for pvc e2e-tests-csi-mock-plugin-9sbdt/pvc-tw42n I0719 20:09:36.753116 1 controller.go:91] GRPC response: volume: attributes: attributes: attributes: attributes: > I0719 20:09:36.753444 1 controller.go:92] GRPC error: I0719 20:09:36.753501 1 controller.go:336] create volume rep: {CapacityBytes:1073741824 Id:csi-cephfs-dyn-a6fdc080-8b8f-11e8-b41c-0242ac110004 Attributes:map[monitors:172.17.0.3 pool:cephfs_data provisionVolume:true csiProvisionerSecretName:cephfs-secret csiProvisionerSecretNamespace:e2e-tests-csi-mock-plugin-9sbdt]} I0719 20:09:36.753681 1 controller.go:386] successfully created PV {GCEPersistentDisk:nil AWSElasticBlockStore:nil HostPath:nil Glusterfs:nil NFS:nil RBD:nil ISCSI:nil Cinder:nil CephFS:nil FC:nil Flocker:nil FlexVolume:nil AzureFile:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil PortworxVolume:nil ScaleIO:nil Local:nil StorageOS:nil CSI:&CSIPersistentVolumeSource{Driver:csi-cephfsplugin,VolumeHandle:csi-cephfs-dyn-a6fdc080-8b8f-11e8-b41c-0242ac110004,ReadOnly:false,FSType:,VolumeAttributes:map[string]string{csiProvisionerSecretName: cephfs-secret,csiProvisionerSecretNamespace: e2e-tests-csi-mock-plugin-9sbdt,monitors: 172.17.0.3,pool: cephfs_data,provisionVolume: true,storage.kubernetes.io/csiProvisionerIdentity: 1532030906132-8081-csi-cephfsplugin,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:&SecretReference{Name:cephfs-secret,Namespace:e2e-tests-csi-mock-plugin-9sbdt,},}} I0719 20:09:36.754165 1 controller.go:900] volume "pvc-a6d54c828b8f11e8" for claim 
"e2e-tests-csi-mock-plugin-9sbdt/pvc-tw42n" created I0719 20:09:36.758304 1 controller.go:914] provisionClaimOperation [e2e-tests-csi-mock-plugin-9sbdt/pvc-tw42n]: trying to save volume pvc-a6d54c828b8f11e8 I0719 20:09:36.779096 1 controller.go:917] volume "pvc-a6d54c828b8f11e8" for claim "e2e-tests-csi-mock-plugin-9sbdt/pvc-tw42n" saved I0719 20:09:36.779159 1 controller.go:953] volume "pvc-a6d54c828b8f11e8" provisioned for claim "e2e-tests-csi-mock-plugin-9sbdt/pvc-tw42n" I0719 20:09:36.813520 1 controller.go:1010] claim event received: ADDED e2e-tests-csi-mock-plugin-9sbdt/pvc-tw42n.1542de0e6bd19a9b e2e-tests-csi-mock-plugin-9sbdt/pvc-tw42n ProvisioningSucceeded I0719 20:09:37.140924 1 controller.go:1167] scheduleOperation[delete-pvc-a6d54c828b8f11e8[a90e8d95-8b8f-11e8-9d15-525400b854f0]] I0719 20:09:37.141069 1 controller.go:1115] deleteVolumeOperation [pvc-a6d54c828b8f11e8] started I0719 20:09:37.149509 1 controller.go:88] GRPC call: /csi.v0.Identity/GetPluginCapabilities I0719 20:09:37.149584 1 controller.go:89] GRPC request: I0719 20:09:37.151070 1 controller.go:91] GRPC response: capabilities: > I0719 20:09:37.151196 1 controller.go:92] GRPC error: I0719 20:09:37.151247 1 controller.go:88] GRPC call: /csi.v0.Controller/ControllerGetCapabilities I0719 20:09:37.151273 1 controller.go:89] GRPC request: I0719 20:09:37.152434 1 controller.go:91] GRPC response: capabilities: > I0719 20:09:37.152538 1 controller.go:92] GRPC error: I0719 20:09:37.152576 1 controller.go:88] GRPC call: /csi.v0.Identity/GetPluginInfo I0719 20:09:37.152632 1 controller.go:89] GRPC request: I0719 20:09:37.153957 1 controller.go:91] GRPC response: name:"csi-cephfsplugin" vendor_version:"0.2.0" I0719 20:09:37.154146 1 controller.go:92] GRPC error: I0719 20:09:37.341850 1 controller.go:88] GRPC call: /csi.v0.Controller/DeleteVolume I0719 20:09:37.341901 1 controller.go:89] GRPC request: volume_id:"csi-cephfs-dyn-a6fdc080-8b8f-11e8-b41c-0242ac110004" controller_delete_secrets: controller_delete_secrets: E0719 20:09:37.407879 1 leaderelection.go:244] error initially creating leader election record: create not allowed, PVC should already exist I0719 20:09:37.752113 1 controller.go:91] GRPC response: I0719 20:09:37.752293 1 controller.go:92] GRPC error: rpc error: code = Internal desc = cephfs: ceph failed with following error: exit status 1 cephfs: ceph output: 2018-07-19 20:09:37.726129 7f793f480700 0 librados: client.admin authentication error (95) Operation not supported [errno 95] error connecting to the cluster E0719 20:09:37.752379 1 controller.go:1138] Deletion of volume "pvc-a6d54c828b8f11e8" failed: rpc error: code = Internal desc = cephfs: ceph failed with following error: exit status 1 cephfs: ceph output: 2018-07-19 20:09:37.726129 7f793f480700 0 librados: client.admin authentication error (95) Operation not supported [errno 95] error connecting to the cluster E0719 20:09:37.752650 1 goroutinemap.go:165] Operation for "delete-pvc-a6d54c828b8f11e8[a90e8d95-8b8f-11e8-9d15-525400b854f0]" failed. No retries permitted until 2018-07-19 20:09:38.252518432 +0000 UTC m=+72.303588172 (durationBeforeRetry 500ms). 
Error: "rpc error: code = Internal desc = cephfs: ceph failed with following error: exit status 1\ncephfs: ceph output: 2018-07-19 20:09:37.726129 7f793f480700 0 librados: client.admin authentication error (95) Operation not supported\n[errno 95] error connecting to the cluster\n" E0719 20:09:39.416560 1 leaderelection.go:244] error initially creating leader election record: create not allowed, PVC should already exist I0719 20:09:41.194113 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:498: forcing resync I0719 20:09:41.194268 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:496: forcing resync I0719 20:09:41.196842 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:497: forcing resync I0719 20:09:41.196932 1 controller.go:1167] scheduleOperation[delete-pvc-a6d54c828b8f11e8[a90e8d95-8b8f-11e8-9d15-525400b854f0]] I0719 20:09:41.196978 1 controller.go:1115] deleteVolumeOperation [pvc-a6d54c828b8f11e8] started I0719 20:09:41.200576 1 controller.go:88] GRPC call: /csi.v0.Identity/GetPluginCapabilities I0719 20:09:41.200646 1 controller.go:89] GRPC request: I0719 20:09:41.201508 1 controller.go:91] GRPC response: capabilities: > I0719 20:09:41.201580 1 controller.go:92] GRPC error: I0719 20:09:41.201610 1 controller.go:88] GRPC call: /csi.v0.Controller/ControllerGetCapabilities I0719 20:09:41.201634 1 controller.go:89] GRPC request: I0719 20:09:41.202252 1 controller.go:91] GRPC response: capabilities: > I0719 20:09:41.202297 1 controller.go:92] GRPC error: I0719 20:09:41.202315 1 controller.go:88] GRPC call: /csi.v0.Identity/GetPluginInfo I0719 20:09:41.202321 1 controller.go:89] GRPC request: I0719 20:09:41.202755 1 controller.go:91] GRPC response: name:"csi-cephfsplugin" vendor_version:"0.2.0" I0719 20:09:41.202824 1 controller.go:92] GRPC error: I0719 20:09:41.207339 1 controller.go:88] GRPC call: /csi.v0.Controller/DeleteVolume I0719 20:09:41.207424 1 controller.go:89] GRPC request: volume_id:"csi-cephfs-dyn-a6fdc080-8b8f-11e8-b41c-0242ac110004" controller_delete_secrets: controller_delete_secrets: I0719 20:09:41.215170 1 controller.go:91] GRPC response: I0719 20:09:41.215285 1 controller.go:92] GRPC error: rpc error: code = Internal desc = cephfs: mount failed with following error: exit status 16 cephfs: mount output: mount error 16 = Device or resource busy E0719 20:09:41.215379 1 controller.go:1138] Deletion of volume "pvc-a6d54c828b8f11e8" failed: rpc error: code = Internal desc = cephfs: mount failed with following error: exit status 16 cephfs: mount output: mount error 16 = Device or resource busy E0719 20:09:41.215616 1 goroutinemap.go:165] Operation for "delete-pvc-a6d54c828b8f11e8[a90e8d95-8b8f-11e8-9d15-525400b854f0]" failed. No retries permitted until 2018-07-19 20:09:42.215545058 +0000 UTC m=+76.266614844 (durationBeforeRetry 1s). 
Error: "rpc error: code = Internal desc = cephfs: mount failed with following error: exit status 16\ncephfs: mount output: mount error 16 = Device or resource busy\n" E0719 20:09:41.444966 1 leaderelection.go:244] error initially creating leader election record: create not allowed, PVC should already exist E0719 20:09:43.438631 1 leaderelection.go:244] error initially creating leader election record: create not allowed, PVC should already exist E0719 20:09:45.452546 1 leaderelection.go:244] error initially creating leader election record: create not allowed, PVC should already exist E0719 20:09:45.456399 1 leaderelection.go:244] error initially creating leader election record: create not allowed, PVC should already exist I0719 20:09:45.456485 1 leaderelection.go:219] failed to renew lease to provision for pvc e2e-tests-csi-mock-plugin-9sbdt/pvc-tw42n I0719 20:09:56.194655 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:496: forcing resync I0719 20:09:56.194684 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:498: forcing resync I0719 20:09:56.197205 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:497: forcing resync I0719 20:09:56.197345 1 controller.go:1167] scheduleOperation[delete-pvc-a6d54c828b8f11e8[a90e8d95-8b8f-11e8-9d15-525400b854f0]] I0719 20:09:56.197421 1 controller.go:1115] deleteVolumeOperation [pvc-a6d54c828b8f11e8] started I0719 20:09:56.414293 1 controller.go:88] GRPC call: /csi.v0.Identity/GetPluginCapabilities I0719 20:09:56.414363 1 controller.go:89] GRPC request: I0719 20:09:56.415661 1 controller.go:91] GRPC response: capabilities: > I0719 20:09:56.415830 1 controller.go:92] GRPC error: I0719 20:09:56.415888 1 controller.go:88] GRPC call: /csi.v0.Controller/ControllerGetCapabilities I0719 20:09:56.415923 1 controller.go:89] GRPC request: I0719 20:09:56.417730 1 controller.go:91] GRPC response: capabilities: > I0719 20:09:56.417884 1 controller.go:92] GRPC error: I0719 20:09:56.417937 1 controller.go:88] GRPC call: /csi.v0.Identity/GetPluginInfo I0719 20:09:56.417960 1 controller.go:89] GRPC request: I0719 20:09:56.419140 1 controller.go:91] GRPC response: name:"csi-cephfsplugin" vendor_version:"0.2.0" I0719 20:09:56.419239 1 controller.go:92] GRPC error: I0719 20:09:56.428497 1 controller.go:88] GRPC call: /csi.v0.Controller/DeleteVolume I0719 20:09:56.428571 1 controller.go:89] GRPC request: volume_id:"csi-cephfs-dyn-a6fdc080-8b8f-11e8-b41c-0242ac110004" controller_delete_secrets: controller_delete_secrets: I0719 20:09:56.443402 1 controller.go:91] GRPC response: I0719 20:09:56.443483 1 controller.go:92] GRPC error: rpc error: code = Internal desc = cephfs: mount failed with following error: exit status 16 cephfs: mount output: mount error 16 = Device or resource busy E0719 20:09:56.443524 1 controller.go:1138] Deletion of volume "pvc-a6d54c828b8f11e8" failed: rpc error: code = Internal desc = cephfs: mount failed with following error: exit status 16 cephfs: mount output: mount error 16 = Device or resource busy E0719 20:09:56.443708 1 goroutinemap.go:165] Operation for "delete-pvc-a6d54c828b8f11e8[a90e8d95-8b8f-11e8-9d15-525400b854f0]" failed. No retries permitted until 2018-07-19 20:09:58.443650479 +0000 UTC m=+92.494720225 (durationBeforeRetry 2s). 
Error: "rpc error: code = Internal desc = cephfs: mount failed with following error: exit status 16\ncephfs: mount output: mount error 16 = Device or resource busy\n" I0719 20:10:11.195050 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:498: forcing resync I0719 20:10:11.195095 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:496: forcing resync I0719 20:10:11.197625 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:497: forcing resync I0719 20:10:11.197816 1 controller.go:1167] scheduleOperation[delete-pvc-a6d54c828b8f11e8[a90e8d95-8b8f-11e8-9d15-525400b854f0]] I0719 20:10:11.197890 1 controller.go:1115] deleteVolumeOperation [pvc-a6d54c828b8f11e8] started I0719 20:10:11.209495 1 controller.go:88] GRPC call: /csi.v0.Identity/GetPluginCapabilities I0719 20:10:11.209553 1 controller.go:89] GRPC request: I0719 20:10:11.210858 1 controller.go:91] GRPC response: capabilities: > I0719 20:10:11.210949 1 controller.go:92] GRPC error: I0719 20:10:11.210988 1 controller.go:88] GRPC call: /csi.v0.Controller/ControllerGetCapabilities I0719 20:10:11.211009 1 controller.go:89] GRPC request: I0719 20:10:11.212018 1 controller.go:91] GRPC response: capabilities: > I0719 20:10:11.212157 1 controller.go:92] GRPC error: I0719 20:10:11.212266 1 controller.go:88] GRPC call: /csi.v0.Identity/GetPluginInfo I0719 20:10:11.212436 1 controller.go:89] GRPC request: I0719 20:10:11.213655 1 controller.go:91] GRPC response: name:"csi-cephfsplugin" vendor_version:"0.2.0" I0719 20:10:11.213889 1 controller.go:92] GRPC error: I0719 20:10:11.220327 1 controller.go:88] GRPC call: /csi.v0.Controller/DeleteVolume I0719 20:10:11.220372 1 controller.go:89] GRPC request: volume_id:"csi-cephfs-dyn-a6fdc080-8b8f-11e8-b41c-0242ac110004" controller_delete_secrets: controller_delete_secrets: I0719 20:10:11.233488 1 controller.go:91] GRPC response: I0719 20:10:11.233547 1 controller.go:92] GRPC error: rpc error: code = Internal desc = cephfs: mount failed with following error: exit status 16 cephfs: mount output: mount error 16 = Device or resource busy E0719 20:10:11.233577 1 controller.go:1138] Deletion of volume "pvc-a6d54c828b8f11e8" failed: rpc error: code = Internal desc = cephfs: mount failed with following error: exit status 16 cephfs: mount output: mount error 16 = Device or resource busy E0719 20:10:11.233717 1 goroutinemap.go:165] Operation for "delete-pvc-a6d54c828b8f11e8[a90e8d95-8b8f-11e8-9d15-525400b854f0]" failed. No retries permitted until 2018-07-19 20:10:15.233663317 +0000 UTC m=+109.284733050 (durationBeforeRetry 4s). 
Error: "rpc error: code = Internal desc = cephfs: mount failed with following error: exit status 16\ncephfs: mount output: mount error 16 = Device or resource busy\n" I0719 20:10:26.195417 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:498: forcing resync I0719 20:10:26.195626 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:496: forcing resync I0719 20:10:26.197962 1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:497: forcing resync I0719 20:10:26.198084 1 controller.go:1167] scheduleOperation[delete-pvc-a6d54c828b8f11e8[a90e8d95-8b8f-11e8-9d15-525400b854f0]] I0719 20:10:26.198147 1 controller.go:1115] deleteVolumeOperation [pvc-a6d54c828b8f11e8] started I0719 20:10:26.208245 1 controller.go:88] GRPC call: /csi.v0.Identity/GetPluginCapabilities I0719 20:10:26.208284 1 controller.go:89] GRPC request: I0719 20:10:26.209494 1 controller.go:91] GRPC response: capabilities: > I0719 20:10:26.209583 1 controller.go:92] GRPC error: I0719 20:10:26.209637 1 controller.go:88] GRPC call: /csi.v0.Controller/ControllerGetCapabilities I0719 20:10:26.209655 1 controller.go:89] GRPC request: I0719 20:10:26.210508 1 controller.go:91] GRPC response: capabilities: > I0719 20:10:26.210588 1 controller.go:92] GRPC error: I0719 20:10:26.210631 1 controller.go:88] GRPC call: /csi.v0.Identity/GetPluginInfo I0719 20:10:26.210649 1 controller.go:89] GRPC request: I0719 20:10:26.211853 1 controller.go:91] GRPC response: name:"csi-cephfsplugin" vendor_version:"0.2.0" I0719 20:10:26.211938 1 controller.go:92] GRPC error: I0719 20:10:26.218468 1 controller.go:88] GRPC call: /csi.v0.Controller/DeleteVolume I0719 20:10:26.218510 1 controller.go:89] GRPC request: volume_id:"csi-cephfs-dyn-a6fdc080-8b8f-11e8-b41c-0242ac110004" controller_delete_secrets: controller_delete_secrets: I0719 20:10:26.234592 1 controller.go:91] GRPC response: I0719 20:10:26.234675 1 controller.go:92] GRPC error: rpc error: code = Internal desc = cephfs: mount failed with following error: exit status 16 cephfs: mount output: mount error 16 = Device or resource busy E0719 20:10:26.234711 1 controller.go:1138] Deletion of volume "pvc-a6d54c828b8f11e8" failed: rpc error: code = Internal desc = cephfs: mount failed with following error: exit status 16 cephfs: mount output: mount error 16 = Device or resource busy E0719 20:10:26.234864 1 goroutinemap.go:165] Operation for "delete-pvc-a6d54c828b8f11e8[a90e8d95-8b8f-11e8-9d15-525400b854f0]" failed. No retries permitted until 2018-07-19 20:10:34.23480858 +0000 UTC m=+128.285878313 (durationBeforeRetry 8s). 
Error: "rpc error: code = Internal desc = cephfs: mount failed with following error: exit status 16\ncephfs: mount output: mount error 16 = Device or resource busy\n" + kubectl logs csi-cephfs-node-jkxxx -n e2e-tests-csi-mock-plugin-9sbdt -c cephfs-driver I0719 20:08:25.166488 1 driver.go:80] Driver: csi-cephfsplugin version: 0.2.0 I0719 20:08:25.187686 1 driver.go:103] cephfs: setting default volume mounter to fuse I0719 20:08:25.188113 1 driver.go:80] Enabling controller service capability: CREATE_DELETE_VOLUME I0719 20:08:25.188219 1 driver.go:92] Enabling volume access mode: MULTI_NODE_MULTI_WRITER I0719 20:08:25.189527 1 server.go:108] Listening for connections on address: &net.UnixAddr{Name:"//var/lib/kubelet/plugins/csi-cephfsplugin/csi.sock", Net:"unix"} I0719 20:08:26.216188 1 utils.go:96] GRPC call: /csi.v0.Identity/GetPluginInfo I0719 20:08:26.216301 1 utils.go:97] GRPC request: I0719 20:08:26.216393 1 identityserver-default.go:32] Using default GetPluginInnfo I0719 20:08:26.216431 1 utils.go:102] GRPC response: name:"csi-cephfsplugin" vendor_version:"0.2.0" I0719 20:08:26.219185 1 utils.go:96] GRPC call: /csi.v0.Node/NodeGetId I0719 20:08:26.219255 1 utils.go:97] GRPC request: I0719 20:08:26.219347 1 nodeserver-default.go:40] Using default NodeGetId I0719 20:08:26.219370 1 utils.go:102] GRPC response: node_id:"127.0.0.1" I0719 20:09:29.060830 1 utils.go:96] GRPC call: /csi.v0.Identity/GetPluginInfo I0719 20:09:29.060896 1 utils.go:97] GRPC request: I0719 20:09:29.060951 1 identityserver-default.go:32] Using default GetPluginInnfo I0719 20:09:29.060989 1 utils.go:102] GRPC response: name:"csi-cephfsplugin" vendor_version:"0.2.0" I0719 20:09:29.063749 1 utils.go:96] GRPC call: /csi.v0.Identity/Probe I0719 20:09:29.063813 1 utils.go:97] GRPC request: I0719 20:09:29.063847 1 utils.go:102] GRPC response: I0719 20:09:29.066006 1 utils.go:96] GRPC call: /csi.v0.Identity/GetPluginCapabilities I0719 20:09:29.066053 1 utils.go:97] GRPC request: I0719 20:09:29.066088 1 utils.go:102] GRPC response: capabilities: > I0719 20:09:29.067923 1 utils.go:96] GRPC call: /csi.v0.Controller/ControllerGetCapabilities I0719 20:09:29.067964 1 utils.go:97] GRPC request: I0719 20:09:29.068025 1 controllerserver-default.go:82] Using default ControllerGetCapabilities I0719 20:09:29.068043 1 utils.go:102] GRPC response: capabilities: > I0719 20:09:33.278277 1 utils.go:96] GRPC call: /csi.v0.Identity/GetPluginCapabilities I0719 20:09:33.278327 1 utils.go:97] GRPC request: I0719 20:09:33.278363 1 utils.go:102] GRPC response: capabilities: > I0719 20:09:33.279802 1 utils.go:96] GRPC call: /csi.v0.Controller/ControllerGetCapabilities I0719 20:09:33.279847 1 utils.go:97] GRPC request: I0719 20:09:33.279880 1 controllerserver-default.go:82] Using default ControllerGetCapabilities I0719 20:09:33.279897 1 utils.go:102] GRPC response: capabilities: > I0719 20:09:33.281141 1 utils.go:96] GRPC call: /csi.v0.Identity/GetPluginInfo I0719 20:09:33.281173 1 utils.go:97] GRPC request: I0719 20:09:33.281201 1 identityserver-default.go:32] Using default GetPluginInnfo I0719 20:09:33.281218 1 utils.go:102] GRPC response: name:"csi-cephfsplugin" vendor_version:"0.2.0" I0719 20:09:33.302550 1 utils.go:96] GRPC call: /csi.v0.Controller/CreateVolume I0719 20:09:33.302623 1 utils.go:97] GRPC request: name:"pvc-a6d54c828b8f11e8" capacity_range: volume_capabilities: access_mode: > parameters: parameters: parameters: parameters: parameters: controller_create_secrets: controller_create_secrets: I0719 20:09:36.750668 1 
controllerserver.go:105] cephfs: volume csi-cephfs-dyn-a6fdc080-8b8f-11e8-b41c-0242ac110004 successfuly created I0719 20:09:36.751358 1 utils.go:102] GRPC response: volume: attributes: attributes: attributes: attributes: > I0719 20:09:37.150404 1 utils.go:96] GRPC call: /csi.v0.Identity/GetPluginCapabilities I0719 20:09:37.150490 1 utils.go:97] GRPC request: I0719 20:09:37.150535 1 utils.go:102] GRPC response: capabilities: > I0719 20:09:37.151874 1 utils.go:96] GRPC call: /csi.v0.Controller/ControllerGetCapabilities I0719 20:09:37.151942 1 utils.go:97] GRPC request: I0719 20:09:37.151993 1 controllerserver-default.go:82] Using default ControllerGetCapabilities I0719 20:09:37.152011 1 utils.go:102] GRPC response: capabilities: > I0719 20:09:37.153237 1 utils.go:96] GRPC call: /csi.v0.Identity/GetPluginInfo I0719 20:09:37.153290 1 utils.go:97] GRPC request: I0719 20:09:37.153337 1 identityserver-default.go:32] Using default GetPluginInnfo I0719 20:09:37.153358 1 utils.go:102] GRPC response: name:"csi-cephfsplugin" vendor_version:"0.2.0" I0719 20:09:37.342822 1 utils.go:96] GRPC call: /csi.v0.Controller/DeleteVolume I0719 20:09:37.342867 1 utils.go:97] GRPC request: volume_id:"csi-cephfs-dyn-a6fdc080-8b8f-11e8-b41c-0242ac110004" controller_delete_secrets: controller_delete_secrets: E0719 20:09:37.751207 1 controllerserver.go:171] failed to delete ceph user csi-user-a6fdc080-8b8f-11e8-b41c-0242ac110004: cephfs: ceph failed with following error: exit status 1 cephfs: ceph output: 2018-07-19 20:09:37.726129 7f793f480700 0 librados: client.admin authentication error (95) Operation not supported [errno 95] error connecting to the cluster E0719 20:09:37.751315 1 utils.go:100] GRPC error: rpc error: code = Internal desc = cephfs: ceph failed with following error: exit status 1 cephfs: ceph output: 2018-07-19 20:09:37.726129 7f793f480700 0 librados: client.admin authentication error (95) Operation not supported [errno 95] error connecting to the cluster I0719 20:09:41.201107 1 utils.go:96] GRPC call: /csi.v0.Identity/GetPluginCapabilities I0719 20:09:41.201159 1 utils.go:97] GRPC request: I0719 20:09:41.201198 1 utils.go:102] GRPC response: capabilities: > I0719 20:09:41.201933 1 utils.go:96] GRPC call: /csi.v0.Controller/ControllerGetCapabilities I0719 20:09:41.201961 1 utils.go:97] GRPC request: I0719 20:09:41.201989 1 controllerserver-default.go:82] Using default ControllerGetCapabilities I0719 20:09:41.202005 1 utils.go:102] GRPC response: capabilities: > I0719 20:09:41.202520 1 utils.go:96] GRPC call: /csi.v0.Identity/GetPluginInfo I0719 20:09:41.202539 1 utils.go:97] GRPC request: I0719 20:09:41.202555 1 identityserver-default.go:32] Using default GetPluginInnfo I0719 20:09:41.202561 1 utils.go:102] GRPC response: name:"csi-cephfsplugin" vendor_version:"0.2.0" I0719 20:09:41.208034 1 utils.go:96] GRPC call: /csi.v0.Controller/DeleteVolume I0719 20:09:41.208070 1 utils.go:97] GRPC request: volume_id:"csi-cephfs-dyn-a6fdc080-8b8f-11e8-b41c-0242ac110004" controller_delete_secrets: controller_delete_secrets: E0719 20:09:41.214692 1 controllerserver.go:166] failed to delete volume csi-cephfs-dyn-a6fdc080-8b8f-11e8-b41c-0242ac110004: cephfs: mount failed with following error: exit status 16 cephfs: mount output: mount error 16 = Device or resource busy E0719 20:09:41.214796 1 utils.go:100] GRPC error: rpc error: code = Internal desc = cephfs: mount failed with following error: exit status 16 cephfs: mount output: mount error 16 = Device or resource busy I0719 20:09:56.415058 1 utils.go:96] GRPC 
call: /csi.v0.Identity/GetPluginCapabilities I0719 20:09:56.415127 1 utils.go:97] GRPC request: I0719 20:09:56.415162 1 utils.go:102] GRPC response: capabilities: > I0719 20:09:56.416530 1 utils.go:96] GRPC call: /csi.v0.Controller/ControllerGetCapabilities I0719 20:09:56.416677 1 utils.go:97] GRPC request: I0719 20:09:56.416833 1 controllerserver-default.go:82] Using default ControllerGetCapabilities I0719 20:09:56.416961 1 utils.go:102] GRPC response: capabilities: > I0719 20:09:56.418675 1 utils.go:96] GRPC call: /csi.v0.Identity/GetPluginInfo I0719 20:09:56.418710 1 utils.go:97] GRPC request: I0719 20:09:56.418737 1 identityserver-default.go:32] Using default GetPluginInnfo I0719 20:09:56.418750 1 utils.go:102] GRPC response: name:"csi-cephfsplugin" vendor_version:"0.2.0" I0719 20:09:56.429480 1 utils.go:96] GRPC call: /csi.v0.Controller/DeleteVolume I0719 20:09:56.429548 1 utils.go:97] GRPC request: volume_id:"csi-cephfs-dyn-a6fdc080-8b8f-11e8-b41c-0242ac110004" controller_delete_secrets: controller_delete_secrets: E0719 20:09:56.442655 1 controllerserver.go:166] failed to delete volume csi-cephfs-dyn-a6fdc080-8b8f-11e8-b41c-0242ac110004: cephfs: mount failed with following error: exit status 16 cephfs: mount output: mount error 16 = Device or resource busy E0719 20:09:56.442905 1 utils.go:100] GRPC error: rpc error: code = Internal desc = cephfs: mount failed with following error: exit status 16 cephfs: mount output: mount error 16 = Device or resource busy I0719 20:10:11.210235 1 utils.go:96] GRPC call: /csi.v0.Identity/GetPluginCapabilities I0719 20:10:11.210283 1 utils.go:97] GRPC request: I0719 20:10:11.210321 1 utils.go:102] GRPC response: capabilities: > I0719 20:10:11.211386 1 utils.go:96] GRPC call: /csi.v0.Controller/ControllerGetCapabilities I0719 20:10:11.211422 1 utils.go:97] GRPC request: I0719 20:10:11.211454 1 controllerserver-default.go:82] Using default ControllerGetCapabilities I0719 20:10:11.211465 1 utils.go:102] GRPC response: capabilities: > I0719 20:10:11.213091 1 utils.go:96] GRPC call: /csi.v0.Identity/GetPluginInfo I0719 20:10:11.213125 1 utils.go:97] GRPC request: I0719 20:10:11.213160 1 identityserver-default.go:32] Using default GetPluginInnfo I0719 20:10:11.213173 1 utils.go:102] GRPC response: name:"csi-cephfsplugin" vendor_version:"0.2.0" I0719 20:10:11.220891 1 utils.go:96] GRPC call: /csi.v0.Controller/DeleteVolume I0719 20:10:11.220937 1 utils.go:97] GRPC request: volume_id:"csi-cephfs-dyn-a6fdc080-8b8f-11e8-b41c-0242ac110004" controller_delete_secrets: controller_delete_secrets: E0719 20:10:11.233026 1 controllerserver.go:166] failed to delete volume csi-cephfs-dyn-a6fdc080-8b8f-11e8-b41c-0242ac110004: cephfs: mount failed with following error: exit status 16 cephfs: mount output: mount error 16 = Device or resource busy E0719 20:10:11.233075 1 utils.go:100] GRPC error: rpc error: code = Internal desc = cephfs: mount failed with following error: exit status 16 cephfs: mount output: mount error 16 = Device or resource busy I0719 20:10:26.209058 1 utils.go:96] GRPC call: /csi.v0.Identity/GetPluginCapabilities I0719 20:10:26.209103 1 utils.go:97] GRPC request: I0719 20:10:26.209141 1 utils.go:102] GRPC response: capabilities: > I0719 20:10:26.210104 1 utils.go:96] GRPC call: /csi.v0.Controller/ControllerGetCapabilities I0719 20:10:26.210138 1 utils.go:97] GRPC request: I0719 20:10:26.210167 1 controllerserver-default.go:82] Using default ControllerGetCapabilities I0719 20:10:26.210179 1 utils.go:102] GRPC response: capabilities: > I0719 
20:10:26.211318 1 utils.go:96] GRPC call: /csi.v0.Identity/GetPluginInfo I0719 20:10:26.211364 1 utils.go:97] GRPC request: I0719 20:10:26.211409 1 identityserver-default.go:32] Using default GetPluginInnfo I0719 20:10:26.211446 1 utils.go:102] GRPC response: name:"csi-cephfsplugin" vendor_version:"0.2.0" I0719 20:10:26.219057 1 utils.go:96] GRPC call: /csi.v0.Controller/DeleteVolume I0719 20:10:26.219098 1 utils.go:97] GRPC request: volume_id:"csi-cephfs-dyn-a6fdc080-8b8f-11e8-b41c-0242ac110004" controller_delete_secrets: controller_delete_secrets: E0719 20:10:26.234173 1 controllerserver.go:166] failed to delete volume csi-cephfs-dyn-a6fdc080-8b8f-11e8-b41c-0242ac110004: cephfs: mount failed with following error: exit status 16 cephfs: mount output: mount error 16 = Device or resource busy E0719 20:10:26.234239 1 utils.go:100] GRPC error: rpc error: code = Internal desc = cephfs: mount failed with following error: exit status 16 cephfs: mount output: mount error 16 = Device or resource busy + kubectl exec -it ceph-server -n e2e-tests-csi-mock-plugin-9sbdt -- find /var/log/ceph/ -name '*log' /var/log/ceph/ceph-osd.1.log /var/log/ceph/ceph-mon.a.log /var/log/ceph/ceph-client.admin.log /var/log/ceph/ceph.log /var/log/ceph/ceph-osd.0.log /var/log/ceph/ceph.audit.log /var/log/ceph/ceph-mds.cephfs.log + kubectl exec -it ceph-server -n e2e-tests-csi-mock-plugin-9sbdt -- find /var/log/ceph/ -name '*log' -print -exec cat '{}' ';' /var/log/ceph/ceph-osd.1.log 2018-07-19 20:08:31.467434 7efdb17f9940 0 ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185), process ceph-osd, pid 269 2018-07-19 20:08:31.530219 7efdb17f9940 1 filestore(/var/lib/ceph/osd/ceph-1) mkfs in /var/lib/ceph/osd/ceph-1 2018-07-19 20:08:31.530656 7efdb17f9940 1 filestore(/var/lib/ceph/osd/ceph-1) mkfs generated fsid fdd4e33b-5dce-4483-8081-3cf328216faa 2018-07-19 20:08:31.627537 7efdb17f9940 1 filestore(/var/lib/ceph/osd/ceph-1) write_version_stamp 4 2018-07-19 20:08:32.026117 7efdb17f9940 0 filestore(/var/lib/ceph/osd/ceph-1) backend generic (magic 0x794c7630) 2018-07-19 20:08:32.395851 7efdb17f9940 1 filestore(/var/lib/ceph/osd/ceph-1) leveldb db exists/created 2018-07-19 20:08:32.395962 7efdb17f9940 -1 FileJournal::_open_any: aio not supported without directio; disabling aio 2018-07-19 20:08:32.396394 7efdb17f9940 1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 9: 134217728 bytes, block size 4096 bytes, directio = 0, aio = 0 2018-07-19 20:08:32.396515 7efdb17f9940 0 filestore(/var/lib/ceph/osd/ceph-1) mkjournal created journal on /var/lib/ceph/osd/ceph-1/journal 2018-07-19 20:08:32.553905 7efdb17f9940 1 filestore(/var/lib/ceph/osd/ceph-1) mkfs done in /var/lib/ceph/osd/ceph-1 2018-07-19 20:08:32.554107 7efdb17f9940 0 filestore(/var/lib/ceph/osd/ceph-1) backend generic (magic 0x794c7630) 2018-07-19 20:08:32.554991 7efdb17f9940 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-1) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option 2018-07-19 20:08:32.555020 7efdb17f9940 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-1) detect_features: SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config option 2018-07-19 20:08:32.555078 7efdb17f9940 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-1) detect_features: splice is supported 2018-07-19 20:08:32.629668 7efdb17f9940 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-1) detect_features: syncfs(2) syscall fully supported (by glibc and kernel) 2018-07-19 20:08:32.630069 7efdb17f9940 0 filestore(/var/lib/ceph/osd/ceph-1) limited 
size xattrs 2018-07-19 20:08:32.630651 7efdb17f9940 1 leveldb: Recovering log #3 2018-07-19 20:08:32.856576 7efdb17f9940 1 leveldb: Delete type=0 #3 2018-07-19 20:08:32.856688 7efdb17f9940 1 leveldb: Delete type=3 #2 2018-07-19 20:08:32.856920 7efdb17f9940 -1 FileJournal::_open_any: aio not supported without directio; disabling aio 2018-07-19 20:08:32.856951 7efdb17f9940 0 filestore(/var/lib/ceph/osd/ceph-1) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled 2018-07-19 20:08:32.857726 7efdb17f9940 1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 14: 134217728 bytes, block size 4096 bytes, directio = 0, aio = 0 2018-07-19 20:08:32.857884 7efdb17f9940 1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 14: 134217728 bytes, block size 4096 bytes, directio = 0, aio = 0 2018-07-19 20:08:32.858392 7efdb17f9940 1 filestore(/var/lib/ceph/osd/ceph-1) upgrade 2018-07-19 20:08:32.858500 7efdb17f9940 -1 filestore(/var/lib/ceph/osd/ceph-1) could not find #-1:7b3f43c4:::osd_superblock:0# in index: (2) No such file or directory 2018-07-19 20:08:33.998663 7efdb17f9940 1 journal close /var/lib/ceph/osd/ceph-1/journal 2018-07-19 20:08:34.054852 7efdb17f9940 -1 created object store /var/lib/ceph/osd/ceph-1 for osd.1 fsid 3cc57754-8c43-4f67-b7bb-ee0a564141d3 2018-07-19 20:08:34.054946 7efdb17f9940 -1 auth: error reading file: /var/lib/ceph/osd/ceph-1/keyring: can't open /var/lib/ceph/osd/ceph-1/keyring: (2) No such file or directory 2018-07-19 20:08:34.055228 7efdb17f9940 -1 created new key in keyring /var/lib/ceph/osd/ceph-1/keyring 2018-07-19 20:08:36.486822 7f29f2c1c940 0 ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185), process ceph-osd, pid 333 2018-07-19 20:08:36.489365 7f29f2c1c940 0 pidfile_write: ignore empty --pid-file 2018-07-19 20:08:36.582492 7f29f2c1c940 0 filestore(/var/lib/ceph/osd/ceph-1) backend generic (magic 0x794c7630) 2018-07-19 20:08:36.583194 7f29f2c1c940 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-1) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option 2018-07-19 20:08:36.583260 7f29f2c1c940 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-1) detect_features: SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config option 2018-07-19 20:08:36.583328 7f29f2c1c940 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-1) detect_features: splice is supported 2018-07-19 20:08:36.638105 7f29f2c1c940 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-1) detect_features: syncfs(2) syscall fully supported (by glibc and kernel) 2018-07-19 20:08:36.709731 7f29f2c1c940 0 filestore(/var/lib/ceph/osd/ceph-1) limited size xattrs 2018-07-19 20:08:36.710742 7f29f2c1c940 1 leveldb: Recovering log #5 2018-07-19 20:08:36.710899 7f29f2c1c940 1 leveldb: Level-0 table #7: started 2018-07-19 20:08:36.818588 7f29f2c1c940 1 leveldb: Level-0 table #7: 146 bytes OK 2018-07-19 20:08:37.021117 7f29f2c1c940 1 leveldb: Delete type=0 #5 2018-07-19 20:08:37.021301 7f29f2c1c940 1 leveldb: Delete type=3 #4 2018-07-19 20:08:37.021720 7f29f2c1c940 -1 FileJournal::_open_any: aio not supported without directio; disabling aio 2018-07-19 20:08:37.021819 7f29f2c1c940 0 filestore(/var/lib/ceph/osd/ceph-1) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled 2018-07-19 20:08:37.022418 7f29f2c1c940 1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 18: 134217728 bytes, block size 4096 bytes, directio = 0, aio = 0 2018-07-19 20:08:37.028848 7f29f2c1c940 1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 18: 134217728 bytes, block size 4096 bytes, 
directio = 0, aio = 0 2018-07-19 20:08:37.029713 7f29f2c1c940 1 filestore(/var/lib/ceph/osd/ceph-1) upgrade 2018-07-19 20:08:37.136103 7f29f2c1c940 0 cls/cephfs/cls_cephfs.cc:202: loading cephfs_size_scan 2018-07-19 20:08:37.144559 7f29f2c1c940 0 cls/hello/cls_hello.cc:305: loading cls_hello 2018-07-19 20:08:37.150342 7f29f2c1c940 0 osd.1 0 crush map has features 2199057072128, adjusting msgr requires for clients 2018-07-19 20:08:37.150379 7f29f2c1c940 0 osd.1 0 crush map has features 2199057072128 was 8705, adjusting msgr requires for mons 2018-07-19 20:08:37.150388 7f29f2c1c940 0 osd.1 0 crush map has features 2199057072128, adjusting msgr requires for osds 2018-07-19 20:08:37.150502 7f29f2c1c940 0 osd.1 0 load_pgs 2018-07-19 20:08:37.150583 7f29f2c1c940 0 osd.1 0 load_pgs opened 0 pgs 2018-07-19 20:08:37.150634 7f29f2c1c940 0 osd.1 0 using 0 op queue with priority op cut off at 64. 2018-07-19 20:08:37.153046 7f29f2c1c940 -1 osd.1 0 log_to_monitors {default=true} 2018-07-19 20:08:37.158877 7f29f2c1c940 0 osd.1 0 done with init, starting boot process 2018-07-19 20:08:38.200151 7f29e36f4700 0 osd.1 8 crush map has features 2200130813952, adjusting msgr requires for clients 2018-07-19 20:08:38.200157 7f29e36f4700 0 osd.1 8 crush map has features 2200130813952 was 2199057080833, adjusting msgr requires for mons 2018-07-19 20:08:38.200171 7f29e36f4700 0 osd.1 8 crush map has features 2200130813952, adjusting msgr requires for osds /var/log/ceph/ceph-mon.a.log 2018-07-19 20:08:22.174644 7f10b902e780 1 leveldb: Delete type=3 #1 2018-07-19 20:08:22.254855 7fa84fdc1780 0 ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185), process ceph-mon, pid 20 2018-07-19 20:08:22.256169 7fa84fdc1780 0 pidfile_write: ignore empty --pid-file 2018-07-19 20:08:22.333234 7fa84fdc1780 1 leveldb: Recovering log #3 2018-07-19 20:08:22.333347 7fa84fdc1780 1 leveldb: Level-0 table #5: started 2018-07-19 20:08:22.375185 7fa84fdc1780 1 leveldb: Level-0 table #5: 559 bytes OK 2018-07-19 20:08:22.458860 7fa84fdc1780 1 leveldb: Delete type=0 #3 2018-07-19 20:08:22.459014 7fa84fdc1780 1 leveldb: Delete type=3 #2 2018-07-19 20:08:22.459501 7fa84fdc1780 -1 WARNING: 'mon addr' config option 172.17.0.3:0/0 does not match monmap file continuing with monmap configuration 2018-07-19 20:08:22.459651 7fa84fdc1780 0 starting mon.a rank 0 at 172.17.0.3:6789/0 mon_data /var/lib/ceph/mon/ceph-a fsid 3cc57754-8c43-4f67-b7bb-ee0a564141d3 2018-07-19 20:08:22.460339 7fa84fdc1780 1 mon.a@-1(probing) e0 preinit fsid 3cc57754-8c43-4f67-b7bb-ee0a564141d3 2018-07-19 20:08:22.461467 7fa84fdc1780 0 mon.a@-1(probing) e0 my rank is now 0 (was -1) 2018-07-19 20:08:22.461502 7fa84fdc1780 1 mon.a@0(probing) e0 win_standalone_election 2018-07-19 20:08:22.461531 7fa84fdc1780 1 mon.a@0(probing).elector(1) init, last seen epoch 1 2018-07-19 20:08:22.492667 7fa84fdc1780 0 log_channel(cluster) log [INF] : mon.a@0 won leader election with quorum 0 2018-07-19 20:08:22.544213 7fa84fdc1780 1 mon.a@0(leader) e0 apply_quorum_to_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} 2018-07-19 20:08:22.729964 7fa849967700 1 mon.a@0(leader).paxosservice(pgmap 0..0) refresh upgraded, format 1 -> 0 2018-07-19 20:08:22.730003 7fa849967700 1 mon.a@0(leader).pg v0 on_upgrade discarding in-core PGMap 2018-07-19 20:08:22.768022 7fa849967700 1 mon.a@0(leader).paxosservice(auth 0..0) refresh upgraded, format 1 -> 0 
2018-07-19 20:08:22.768507 7fa849967700 1 mon.a@0(probing) e1 win_standalone_election 2018-07-19 20:08:22.768556 7fa849967700 1 mon.a@0(probing).elector(2) init, last seen epoch 2 2018-07-19 20:08:22.810978 7fa849967700 0 log_channel(cluster) log [INF] : mon.a@0 won leader election with quorum 0 2018-07-19 20:08:22.893484 7fa849967700 0 log_channel(cluster) log [INF] : monmap e1: 1 mons at {a=172.17.0.3:6789/0} 2018-07-19 20:08:22.960441 7fa849967700 1 mon.a@0(leader).paxosservice(auth 0..0) refresh upgraded, format 1 -> 0 2018-07-19 20:08:22.960505 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v1: 0 pgs: ; 0 bytes data, 0 kB used, 0 kB / 0 kB avail 2018-07-19 20:08:23.094097 7fa849967700 0 mon.a@0(leader).mds e1 print_map e1 enable_multiple, ever_enabled_multiple: 0,0 compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2} No filesystems configured 2018-07-19 20:08:23.094267 7fa849967700 1 mon.a@0(leader).osd e1 e1: 0 osds: 0 up, 0 in 2018-07-19 20:08:23.152698 7fa849967700 0 mon.a@0(leader).osd e1 crush map has features 2200130813952, adjusting msgr requires 2018-07-19 20:08:23.152707 7fa849967700 0 mon.a@0(leader).osd e1 crush map has features 2200130813952, adjusting msgr requires 2018-07-19 20:08:23.152714 7fa849967700 0 mon.a@0(leader).osd e1 crush map has features 2200130813952, adjusting msgr requires 2018-07-19 20:08:23.152718 7fa849967700 0 mon.a@0(leader).osd e1 crush map has features 2200130813952, adjusting msgr requires 2018-07-19 20:08:23.153454 7fa849967700 1 mon.a@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 1 2018-07-19 20:08:23.153891 7fa849967700 0 log_channel(cluster) log [INF] : fsmap e1: 2018-07-19 20:08:23.198286 7fa849967700 0 log_channel(cluster) log [INF] : osdmap e1: 0 osds: 0 up, 0 in 2018-07-19 20:08:23.312668 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v2: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail 2018-07-19 20:08:23.637347 7fa848164700 0 mon.a@0(leader) e1 handle_command mon_command({"prefix": "osd getcrushmap"} v 0) v1 2018-07-19 20:08:23.637403 7fa848164700 0 log_channel(audit) log [DBG] : from='client.? 172.17.0.3:0/3745188629' entity='client.admin' cmd=[{"prefix": "osd getcrushmap"}]: dispatch 2018-07-19 20:08:24.286899 7fa848164700 0 mon.a@0(leader) e1 handle_command mon_command({"prefix": "osd setcrushmap"} v 0) v1 2018-07-19 20:08:24.286955 7fa848164700 0 log_channel(audit) log [INF] : from='client.? 172.17.0.3:0/2650023655' entity='client.admin' cmd=[{"prefix": "osd setcrushmap"}]: dispatch 2018-07-19 20:08:24.681152 7fa849967700 1 mon.a@0(leader).osd e2 e2: 0 osds: 0 up, 0 in 2018-07-19 20:08:24.739023 7fa849967700 0 log_channel(audit) log [INF] : from='client.? 172.17.0.3:0/2650023655' entity='client.admin' cmd='[{"prefix": "osd setcrushmap"}]': finished 2018-07-19 20:08:24.843252 7fa849967700 0 log_channel(cluster) log [INF] : osdmap e2: 0 osds: 0 up, 0 in 2018-07-19 20:08:24.914950 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v3: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail 2018-07-19 20:08:25.297889 7fa848164700 0 mon.a@0(leader) e1 handle_command mon_command({"prefix": "osd create"} v 0) v1 2018-07-19 20:08:25.297964 7fa848164700 0 log_channel(audit) log [INF] : from='client.? 
172.17.0.3:0/2350260045' entity='client.admin' cmd=[{"prefix": "osd create"}]: dispatch 2018-07-19 20:08:26.139104 7fa849967700 1 mon.a@0(leader).osd e3 e3: 1 osds: 0 up, 0 in 2018-07-19 20:08:26.290106 7fa849967700 0 log_channel(audit) log [INF] : from='client.? 172.17.0.3:0/2350260045' entity='client.admin' cmd='[{"prefix": "osd create"}]': finished 2018-07-19 20:08:26.400318 7fa849967700 0 log_channel(cluster) log [INF] : osdmap e3: 1 osds: 0 up, 0 in 2018-07-19 20:08:26.583861 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v4: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail 2018-07-19 20:08:28.476019 7fa848164700 0 mon.a@0(leader) e1 handle_command mon_command({"prefix": "auth add", "entity": "osd.0", "caps": ["osd", "allow *", "mon", "allow rwx"]} v 0) v1 2018-07-19 20:08:28.476073 7fa848164700 0 log_channel(audit) log [INF] : from='client.? 172.17.0.3:0/308550940' entity='client.admin' cmd=[{"prefix": "auth add", "entity": "osd.0", "caps": ["osd", "allow *", "mon", "allow rwx"]}]: dispatch 2018-07-19 20:08:28.627971 7fa849967700 0 log_channel(audit) log [INF] : from='client.? 172.17.0.3:0/308550940' entity='client.admin' cmd='[{"prefix": "auth add", "entity": "osd.0", "caps": ["osd", "allow *", "mon", "allow rwx"]}]': finished 2018-07-19 20:08:29.197172 7fa848164700 0 mon.a@0(leader) e1 handle_command mon_command({"prefix": "osd crush add", "args": ["root=default", "host=cephbox"], "id": 0, "weight": 1.0} v 0) v1 2018-07-19 20:08:29.197274 7fa848164700 0 log_channel(audit) log [INF] : from='client.? 172.17.0.3:0/1567463050' entity='client.admin' cmd=[{"prefix": "osd crush add", "args": ["root=default", "host=cephbox"], "id": 0, "weight": 1.0}]: dispatch 2018-07-19 20:08:30.065193 7fa849967700 1 mon.a@0(leader).osd e4 e4: 1 osds: 0 up, 0 in 2018-07-19 20:08:30.098926 7fa849967700 0 log_channel(audit) log [INF] : from='client.? 172.17.0.3:0/1567463050' entity='client.admin' cmd='[{"prefix": "osd crush add", "args": ["root=default", "host=cephbox"], "id": 0, "weight": 1.0}]': finished 2018-07-19 20:08:30.139316 7fa849967700 0 log_channel(cluster) log [INF] : osdmap e4: 1 osds: 0 up, 0 in 2018-07-19 20:08:30.234740 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v5: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail 2018-07-19 20:08:30.657711 7fa848164700 0 mon.a@0(leader) e1 handle_command mon_command({"prefix": "osd create"} v 0) v1 2018-07-19 20:08:30.657853 7fa848164700 0 log_channel(audit) log [INF] : from='client.? 172.17.0.3:0/2848611712' entity='client.admin' cmd=[{"prefix": "osd create"}]: dispatch 2018-07-19 20:08:31.335355 7fa849967700 1 mon.a@0(leader).osd e5 e5: 2 osds: 1 up, 1 in 2018-07-19 20:08:31.400961 7fa849967700 0 log_channel(cluster) log [INF] : osd.0 172.17.0.3:6800/177 boot 2018-07-19 20:08:31.401316 7fa849967700 0 log_channel(audit) log [INF] : from='client.? 
172.17.0.3:0/2848611712' entity='client.admin' cmd='[{"prefix": "osd create"}]': finished 2018-07-19 20:08:31.527319 7fa849967700 0 log_channel(cluster) log [INF] : osdmap e5: 2 osds: 1 up, 1 in 2018-07-19 20:08:31.579113 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v6: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail 2018-07-19 20:08:32.702858 7fa849967700 1 mon.a@0(leader).osd e6 e6: 2 osds: 1 up, 1 in 2018-07-19 20:08:32.856254 7fa849967700 0 log_channel(cluster) log [INF] : osdmap e6: 2 osds: 1 up, 1 in 2018-07-19 20:08:32.961344 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v7: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail 2018-07-19 20:08:34.572840 7fa848164700 0 mon.a@0(leader) e1 handle_command mon_command({"prefix": "auth add", "entity": "osd.1", "caps": ["osd", "allow *", "mon", "allow rwx"]} v 0) v1 2018-07-19 20:08:34.572914 7fa848164700 0 log_channel(audit) log [INF] : from='client.? 172.17.0.3:0/249033197' entity='client.admin' cmd=[{"prefix": "auth add", "entity": "osd.1", "caps": ["osd", "allow *", "mon", "allow rwx"]}]: dispatch 2018-07-19 20:08:34.760463 7fa849967700 0 log_channel(audit) log [INF] : from='client.? 172.17.0.3:0/249033197' entity='client.admin' cmd='[{"prefix": "auth add", "entity": "osd.1", "caps": ["osd", "allow *", "mon", "allow rwx"]}]': finished 2018-07-19 20:08:35.282036 7fa848164700 0 mon.a@0(leader) e1 handle_command mon_command({"prefix": "osd crush add", "args": ["root=default", "host=cephbox"], "id": 1, "weight": 1.0} v 0) v1 2018-07-19 20:08:35.282123 7fa848164700 0 log_channel(audit) log [INF] : from='client.? 172.17.0.3:0/2669058995' entity='client.admin' cmd=[{"prefix": "osd crush add", "args": ["root=default", "host=cephbox"], "id": 1, "weight": 1.0}]: dispatch 2018-07-19 20:08:36.383745 7fa849967700 1 mon.a@0(leader).osd e7 e7: 2 osds: 1 up, 1 in 2018-07-19 20:08:36.441564 7fa849967700 0 log_channel(audit) log [INF] : from='client.? 172.17.0.3:0/2669058995' entity='client.admin' cmd='[{"prefix": "osd crush add", "args": ["root=default", "host=cephbox"], "id": 1, "weight": 1.0}]': finished 2018-07-19 20:08:36.516248 7fa849967700 0 log_channel(cluster) log [INF] : osdmap e7: 2 osds: 1 up, 1 in 2018-07-19 20:08:36.600526 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v8: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail 2018-07-19 20:08:36.795035 7fa849967700 0 mon.a@0(leader).mds e2 print_map e2 enable_multiple, ever_enabled_multiple: 0,0 compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2} No filesystems configured 2018-07-19 20:08:36.795311 7fa849967700 0 log_channel(cluster) log [INF] : mds.? 
172.17.0.3:6808/602994783 up:boot 2018-07-19 20:08:36.795513 7fa849967700 0 log_channel(cluster) log [INF] : fsmap e2:, 1 up:standby 2018-07-19 20:08:37.833374 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v9: 64 pgs: 63 creating+peering, 1 creating; 0 bytes data, 26783 MB used, 257 GB / 299 GB avail 2018-07-19 20:08:37.949251 7fa849967700 1 mon.a@0(leader).osd e8 e8: 2 osds: 2 up, 2 in 2018-07-19 20:08:38.041459 7fa849967700 0 log_channel(cluster) log [INF] : osd.1 172.17.0.3:6804/333 boot 2018-07-19 20:08:38.199857 7fa849967700 0 log_channel(cluster) log [INF] : osdmap e8: 2 osds: 2 up, 2 in 2018-07-19 20:08:38.346924 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v10: 64 pgs: 1 creating, 63 creating+peering; 0 bytes data, 26783 MB used, 257 GB / 299 GB avail 2018-07-19 20:08:39.584566 7fa849967700 1 mon.a@0(leader).osd e9 e9: 2 osds: 2 up, 2 in 2018-07-19 20:08:39.762025 7fa849967700 0 log_channel(cluster) log [INF] : osdmap e9: 2 osds: 2 up, 2 in 2018-07-19 20:08:39.840208 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v11: 64 pgs: 1 creating, 63 creating+peering; 0 bytes data, 26783 MB used, 257 GB / 299 GB avail 2018-07-19 20:08:41.988579 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v12: 64 pgs: 54 creating+peering, 10 creating+activating+undersized+degraded; 0 bytes data, 26783 MB used, 257 GB / 299 GB avail 2018-07-19 20:08:43.348334 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v13: 64 pgs: 58 creating+peering, 6 creating+activating+undersized+degraded; 0 bytes data, 53567 MB used, 515 GB / 598 GB avail 2018-07-19 20:08:48.439680 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v14: 64 pgs: 58 creating+peering, 6 creating+activating+undersized+degraded; 0 bytes data, 53183 MB used, 515 GB / 598 GB avail 2018-07-19 20:08:53.593053 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v15: 64 pgs: 58 creating+peering, 6 creating+activating+undersized+degraded; 0 bytes data, 52791 MB used, 516 GB / 598 GB avail 2018-07-19 20:08:58.594327 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v16: 64 pgs: 58 creating+peering, 6 creating+activating+undersized+degraded; 0 bytes data, 52791 MB used, 516 GB / 598 GB avail 2018-07-19 20:08:58.919193 7fa849967700 1 mon.a@0(leader).osd e10 e10: 2 osds: 2 up, 2 in 2018-07-19 20:08:59.181977 7fa849967700 0 log_channel(cluster) log [INF] : osdmap e10: 2 osds: 2 up, 2 in 2018-07-19 20:08:59.312896 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v17: 64 pgs: 53 creating+peering, 2 undersized+degraded+peered, 9 creating+activating+undersized+degraded; 0 bytes data, 52016 MB used, 517 GB / 598 GB avail 2018-07-19 20:09:03.390842 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v18: 64 pgs: 21 creating+peering, 32 active+undersized+degraded, 2 undersized+degraded+peered, 9 creating+activating+undersized+degraded; 0 bytes data, 52016 MB used, 517 GB / 598 GB avail 2018-07-19 20:09:04.929898 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v19: 64 pgs: 41 active+undersized+degraded, 23 creating+activating+undersized+degraded; 0 bytes data, 52017 MB used, 517 GB / 598 GB avail 2018-07-19 20:09:08.305929 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v20: 64 pgs: 41 active+undersized+degraded, 23 creating+activating+undersized+degraded; 0 bytes data, 52017 MB used, 517 GB / 598 GB avail 2018-07-19 20:09:09.702069 7fa849967700 1 mon.a@0(leader).osd e11 e11: 2 osds: 2 up, 2 in 2018-07-19 20:09:09.820079 7fa849967700 0 log_channel(cluster) log [INF] : osdmap e11: 2 osds: 2 up, 2 in 
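At this point in the monitor log both OSDs have booted (osd.0 and osd.1) and the placement groups are working their way from creating/peering to active+undersized+degraded; the persistent undersized+degraded state and the 66.667% degraded ratio seen later are consistent with both OSDs sharing the single CRUSH host cephbox, where a host-level replication rule can place only one copy. The test fixture's own readiness check is not part of this dump, but a minimal sketch of waiting for the "N osds: N up, N in" condition reported by these osdmap entries could look like the following (assumes the python-rados bindings and a readable /etc/ceph/ceph.conf in the container; the num_osds/num_up_osds field names match the Jewel-era 'ceph osd stat -f json' output shown by this 10.2.7 cluster and may differ in other releases):

    import json
    import time

    import rados  # python-rados bindings (assumed available next to the ceph CLI)

    # Poll the monitor until every OSD reports up, mirroring the
    # "osdmap eN: 2 osds: 2 up, 2 in" progression in the monitor log above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        deadline = time.time() + 120  # arbitrary timeout for this sketch
        while time.time() < deadline:
            ret, out, errs = cluster.mon_command(
                json.dumps({"prefix": "osd stat", "format": "json"}), b"")
            if ret != 0:
                raise RuntimeError(errs)
            stat = json.loads(out)
            # Field names as emitted by ceph 10.2.x; adjust for newer releases.
            if stat.get("num_osds", 0) and stat.get("num_up_osds") == stat.get("num_osds"):
                print("all OSDs up:", stat)
                break
            time.sleep(2)
    finally:
        cluster.shutdown()

The monitor log continues below with the pgmap catching up to the same state.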
2018-07-19 20:09:09.885596 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v21: 64 pgs: 52 active+undersized+degraded, 12 creating+activating+undersized+degraded; 0 bytes data, 52018 MB used, 517 GB / 598 GB avail 2018-07-19 20:09:09.952537 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v22: 64 pgs: 52 active+undersized+degraded, 12 creating+activating+undersized+degraded; 0 bytes data, 52018 MB used, 517 GB / 598 GB avail 2018-07-19 20:09:10.905728 7fa849967700 1 mon.a@0(leader).osd e12 e12: 2 osds: 2 up, 2 in 2018-07-19 20:09:11.105978 7fa849967700 0 log_channel(cluster) log [INF] : osdmap e12: 2 osds: 2 up, 2 in 2018-07-19 20:09:11.166565 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v23: 64 pgs: 52 active+undersized+degraded, 12 creating+activating+undersized+degraded; 0 bytes data, 52018 MB used, 517 GB / 598 GB avail 2018-07-19 20:09:13.543155 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v24: 64 pgs: 52 active+undersized+degraded, 12 creating+activating+undersized+degraded; 0 bytes data, 52018 MB used, 517 GB / 598 GB avail 2018-07-19 20:09:15.136082 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v25: 64 pgs: 61 active+undersized+degraded, 3 creating+activating+undersized+degraded; 0 bytes data, 52019 MB used, 517 GB / 598 GB avail 2018-07-19 20:09:21.177830 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v26: 64 pgs: 64 active+undersized+degraded; 24576 kB data, 52051 MB used, 517 GB / 598 GB avail; 1366 B/s rd, 4683 kB/s wr, 5 op/s; 18/27 objects degraded (66.667%) 2018-07-19 20:09:23.374699 7fa848965700 0 mon.a@0(leader).data_health(3) update_stats avail 86% total 299 GB, used 26130 MB, avail 258 GB 2018-07-19 20:09:23.793885 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v27: 64 pgs: 64 active+undersized+degraded; 53248 kB data, 52124 MB used, 516 GB / 598 GB avail; 1748 B/s rd, 6060 kB/s wr, 5 op/s; 32/48 objects degraded (66.667%) 2018-07-19 20:09:24.142932 7fa848164700 0 mon.a@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pg_num": 4, "pool": "cephfs_data"} v 0) v1 2018-07-19 20:09:24.143192 7fa848164700 0 log_channel(audit) log [INF] : from='client.? 172.17.0.3:0/3493502123' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 4, "pool": "cephfs_data"}]: dispatch 2018-07-19 20:09:25.112053 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v28: 64 pgs: 64 active+undersized+degraded; 61440 kB data, 52204 MB used, 516 GB / 598 GB avail; 1308 B/s rd, 5889 kB/s wr, 4 op/s; 36/54 objects degraded (66.667%) 2018-07-19 20:09:25.286513 7fa849967700 1 mon.a@0(leader).osd e13 e13: 2 osds: 2 up, 2 in 2018-07-19 20:09:25.354552 7fa849967700 0 log_channel(audit) log [INF] : from='client.? 172.17.0.3:0/3493502123' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 4, "pool": "cephfs_data"}]': finished 2018-07-19 20:09:25.436521 7fa849967700 0 log_channel(cluster) log [INF] : osdmap e13: 2 osds: 2 up, 2 in 2018-07-19 20:09:25.480017 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v29: 68 pgs: 4 creating, 64 active+undersized+degraded; 61440 kB data, 52204 MB used, 516 GB / 598 GB avail; 4244 kB/s wr, 2 op/s; 36/54 objects degraded (66.667%) 2018-07-19 20:09:25.876494 7fa848164700 0 mon.a@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pg_num": 4, "pool": "cephfs_metadata"} v 0) v1 2018-07-19 20:09:25.876563 7fa848164700 0 log_channel(audit) log [INF] : from='client.? 
172.17.0.3:0/2007993106' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 4, "pool": "cephfs_metadata"}]: dispatch 2018-07-19 20:09:26.644085 7fa849967700 1 mon.a@0(leader).osd e14 e14: 2 osds: 2 up, 2 in 2018-07-19 20:09:26.722573 7fa849967700 0 log_channel(audit) log [INF] : from='client.? 172.17.0.3:0/2007993106' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 4, "pool": "cephfs_metadata"}]': finished 2018-07-19 20:09:26.781908 7fa849967700 0 log_channel(cluster) log [INF] : osdmap e14: 2 osds: 2 up, 2 in 2018-07-19 20:09:26.850718 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v30: 72 pgs: 8 creating, 64 active+undersized+degraded; 61440 kB data, 52204 MB used, 516 GB / 598 GB avail; 36/54 objects degraded (66.667%) 2018-07-19 20:09:27.283600 7fa848164700 0 mon.a@0(leader) e1 handle_command mon_command({"prefix": "fs new", "data": "cephfs_data", "fs_name": "cephfs", "metadata": "cephfs_metadata"} v 0) v1 2018-07-19 20:09:27.283713 7fa848164700 0 log_channel(audit) log [INF] : from='client.? 172.17.0.3:0/2849102185' entity='client.admin' cmd=[{"prefix": "fs new", "data": "cephfs_data", "fs_name": "cephfs", "metadata": "cephfs_metadata"}]: dispatch 2018-07-19 20:09:27.508444 7fa849967700 1 mon.a@0(leader).osd e15 e15: 2 osds: 2 up, 2 in 2018-07-19 20:09:27.636745 7fa849967700 0 log_channel(cluster) log [INF] : osdmap e15: 2 osds: 2 up, 2 in 2018-07-19 20:09:27.703848 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v31: 72 pgs: 8 creating, 64 active+undersized+degraded; 61440 kB data, 52204 MB used, 516 GB / 598 GB avail; 36/54 objects degraded (66.667%) 2018-07-19 20:09:28.438760 7fa849967700 0 mon.a@0(leader).mds e3 print_map e3 enable_multiple, ever_enabled_multiple: 0,0 compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2} Filesystem 'cephfs' (1) fs_name cephfs epoch 0 flags 0 created 2018-07-19 20:09:27.441397 modified 2018-07-19 20:09:27.441398 tableserver 0 root 0 session_timeout 60 session_autoclose 300 max_file_size 1099511627776 last_failure 0 last_failure_osd_epoch 0 compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2} max_mds 1 in up {} failed damaged stopped data_pools 1 metadata_pool 2 inline_data disabled Standby daemons: 4106: 172.17.0.3:6808/602994783 'cephfs' mds.-1.0 up:standby seq 1 2018-07-19 20:09:28.439083 7fa849967700 0 log_channel(audit) log [INF] : from='client.? 
172.17.0.3:0/2849102185' entity='client.admin' cmd='[{"prefix": "fs new", "data": "cephfs_data", "fs_name": "cephfs", "metadata": "cephfs_metadata"}]': finished 2018-07-19 20:09:28.439291 7fa849967700 1 mon.a@0(leader).mds e3 adding standby 172.17.0.3:6808/602994783 as mds.0 2018-07-19 20:09:28.505029 7fa849967700 0 log_channel(cluster) log [INF] : fsmap e3: 0/0/1 up, 1 up:standby 2018-07-19 20:09:28.581626 7fa849967700 0 mon.a@0(leader).mds e4 print_map e4 enable_multiple, ever_enabled_multiple: 0,0 compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2} Filesystem 'cephfs' (1) fs_name cephfs epoch 4 flags 0 created 2018-07-19 20:09:27.441397 modified 2018-07-19 20:09:27.441398 tableserver 0 root 0 session_timeout 60 session_autoclose 300 max_file_size 1099511627776 last_failure 0 last_failure_osd_epoch 0 compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2} max_mds 1 in 0 up {0=4106} failed damaged stopped data_pools 1 metadata_pool 2 inline_data disabled 4106: 172.17.0.3:6808/602994783 'cephfs' mds.0.4 up:creating seq 1 2018-07-19 20:09:28.586529 7fa849967700 0 log_channel(cluster) log [INF] : fsmap e4: 1/1/1 up {0=cephfs=up:creating} 2018-07-19 20:09:29.494951 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v32: 72 pgs: 7 creating+activating+undersized+degraded, 65 active+undersized+degraded; 62464 kB data, 52246 MB used, 516 GB / 598 GB avail; 409 kB/s wr, 0 op/s; 38/57 objects degraded (66.667%) 2018-07-19 20:09:31.733592 7fa849967700 0 mon.a@0(leader).mds e5 print_map e5 enable_multiple, ever_enabled_multiple: 0,0 compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2} Filesystem 'cephfs' (1) fs_name cephfs epoch 5 flags 0 created 2018-07-19 20:09:27.441397 modified 2018-07-19 20:09:27.441398 tableserver 0 root 0 session_timeout 60 session_autoclose 300 max_file_size 1099511627776 last_failure 0 last_failure_osd_epoch 0 compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2} max_mds 1 in 0 up {0=4106} failed damaged stopped data_pools 1 metadata_pool 2 inline_data disabled 4106: 172.17.0.3:6808/602994783 'cephfs' mds.0.4 up:active seq 15 2018-07-19 20:09:31.734144 7fa849967700 0 log_channel(cluster) log [INF] : mds.0 172.17.0.3:6808/602994783 up:active 2018-07-19 20:09:31.734377 7fa849967700 0 log_channel(cluster) log [INF] : fsmap e5: 1/1/1 up {0=cephfs=up:active} 2018-07-19 20:09:33.318754 7fa845a59700 0 -- 172.17.0.3:6789/0 >> 172.17.0.4:0/1548690389 pipe(0x561a03035400 sd=24 :6789 s=0 pgs=0 cs=0 l=0 c=0x561a03124000).accept peer addr is really 172.17.0.4:0/1548690389 (socket is 172.17.0.4:60368/0) 2018-07-19 20:09:34.073366 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v33: 72 pgs: 72 active+undersized+degraded; 62472 kB data, 52266 MB used, 516 GB / 598 GB avail; 165 kB/s wr, 7 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:37.355303 7fa845a59700 0 -- 172.17.0.3:6789/0 >> 172.17.0.4:0/1370440833 pipe(0x561a02de6000 sd=22 
:6789 s=0 pgs=0 cs=0 l=0 c=0x561a03124180).accept peer addr is really 172.17.0.4:0/1370440833 (socket is 172.17.0.4:60374/0) 2018-07-19 20:09:37.725529 7fa848164700 1 mon.a@0(leader).auth v4 client did not provide supported auth type 2018-07-19 20:09:38.398113 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v34: 72 pgs: 72 active+undersized+degraded; 62472 kB data, 52266 MB used, 516 GB / 598 GB avail; 1821 B/s wr, 5 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:39.736995 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v35: 72 pgs: 72 active+undersized+degraded; 62484 kB data, 52266 MB used, 516 GB / 598 GB avail; 2719 B/s wr, 1 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:43.287151 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v36: 72 pgs: 72 active+undersized+degraded; 62484 kB data, 52267 MB used, 516 GB / 598 GB avail; 3071 B/s wr, 1 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:44.579186 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v37: 72 pgs: 72 active+undersized+degraded; 62488 kB data, 52267 MB used, 516 GB / 598 GB avail; 1257 B/s wr, 0 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:48.306203 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v38: 72 pgs: 72 active+undersized+degraded; 62488 kB data, 52267 MB used, 516 GB / 598 GB avail; 1228 B/s wr, 0 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:49.757327 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v39: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB used, 516 GB / 598 GB avail; 612 B/s wr, 0 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:53.315215 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v40: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB used, 516 GB / 598 GB avail; 614 B/s wr, 0 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:54.924645 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v41: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB used, 516 GB / 598 GB avail; 208 B/s wr, 0 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:58.292115 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v42: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB used, 516 GB / 598 GB avail; 204 B/s wr, 0 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:59.789663 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v43: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:03.318638 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v44: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:04.593809 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v45: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:08.653361 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v46: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:13.378895 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v47: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:14.679469 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v48: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB used, 516 GB / 598 
GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:18.377682 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v49: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52268 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:19.682305 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v50: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52268 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:23.325197 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v51: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52268 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:23.375068 7fa848965700 0 mon.a@0(leader).data_health(3) update_stats avail 86% total 299 GB, used 26142 MB, avail 258 GB 2018-07-19 20:10:25.392642 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v52: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52268 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:28.309984 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v53: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52276 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:29.494191 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v54: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52284 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:33.319690 7fa849967700 0 log_channel(cluster) log [INF] : pgmap v55: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52284 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) /var/log/ceph/ceph-client.admin.log 2018-07-19 20:09:29.490108 7fde94bfdf80 0 ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185), process ceph-fuse, pid 578 2018-07-19 20:09:29.559966 7fde94bfdf80 -1 init, newargv = 0x563ec7c49020 newargc=11 2018-07-19 20:09:30.589818 7faef1bf2f80 0 ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185), process ceph-fuse, pid 601 2018-07-19 20:09:30.663756 7faef1bf2f80 -1 init, newargv = 0x557236137020 newargc=11 2018-07-19 20:09:31.695131 7fd02a522f80 0 ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185), process ceph-fuse, pid 624 2018-07-19 20:09:31.760647 7fd02a522f80 -1 init, newargv = 0x564d079dd020 newargc=11 /var/log/ceph/ceph.log 2018-07-19 20:08:22.927277 unknown.0 :/0 0 : [INF] mkfs 3cc57754-8c43-4f67-b7bb-ee0a564141d3 2018-07-19 20:08:22.492683 mon.0 172.17.0.3:6789/0 1 : cluster [INF] mon.a@0 won leader election with quorum 0 2018-07-19 20:08:22.810998 mon.0 172.17.0.3:6789/0 2 : cluster [INF] mon.a@0 won leader election with quorum 0 2018-07-19 20:08:22.893513 mon.0 172.17.0.3:6789/0 3 : cluster [INF] monmap e1: 1 mons at {a=172.17.0.3:6789/0} 2018-07-19 20:08:22.960508 mon.0 172.17.0.3:6789/0 4 : cluster [INF] pgmap v1: 0 pgs: ; 0 bytes data, 0 kB used, 0 kB / 0 kB avail 2018-07-19 20:08:23.153909 mon.0 172.17.0.3:6789/0 5 : cluster [INF] fsmap e1: 2018-07-19 20:08:23.198311 mon.0 172.17.0.3:6789/0 6 : cluster [INF] osdmap e1: 0 osds: 0 up, 0 in 2018-07-19 20:08:23.312685 mon.0 172.17.0.3:6789/0 7 : cluster [INF] pgmap v2: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail 2018-07-19 20:08:24.843274 mon.0 172.17.0.3:6789/0 11 : cluster [INF] osdmap e2: 0 osds: 0 up, 0 in 2018-07-19 20:08:24.914960 mon.0 172.17.0.3:6789/0 12 : cluster [INF] pgmap v3: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail 2018-07-19 20:08:26.400333 mon.0 172.17.0.3:6789/0 
15 : cluster [INF] osdmap e3: 1 osds: 0 up, 0 in 2018-07-19 20:08:26.583873 mon.0 172.17.0.3:6789/0 16 : cluster [INF] pgmap v4: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail 2018-07-19 20:08:30.139341 mon.0 172.17.0.3:6789/0 21 : cluster [INF] osdmap e4: 1 osds: 0 up, 0 in 2018-07-19 20:08:30.234758 mon.0 172.17.0.3:6789/0 22 : cluster [INF] pgmap v5: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail 2018-07-19 20:08:31.400975 mon.0 172.17.0.3:6789/0 24 : cluster [INF] osd.0 172.17.0.3:6800/177 boot 2018-07-19 20:08:31.527348 mon.0 172.17.0.3:6789/0 26 : cluster [INF] osdmap e5: 2 osds: 1 up, 1 in 2018-07-19 20:08:31.579128 mon.0 172.17.0.3:6789/0 27 : cluster [INF] pgmap v6: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail 2018-07-19 20:08:32.856280 mon.0 172.17.0.3:6789/0 28 : cluster [INF] osdmap e6: 2 osds: 1 up, 1 in 2018-07-19 20:08:32.961371 mon.0 172.17.0.3:6789/0 29 : cluster [INF] pgmap v7: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail 2018-07-19 20:08:36.516273 mon.0 172.17.0.3:6789/0 34 : cluster [INF] osdmap e7: 2 osds: 1 up, 1 in 2018-07-19 20:08:36.600536 mon.0 172.17.0.3:6789/0 35 : cluster [INF] pgmap v8: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail 2018-07-19 20:08:36.795314 mon.0 172.17.0.3:6789/0 36 : cluster [INF] mds.? 172.17.0.3:6808/602994783 up:boot 2018-07-19 20:08:36.795528 mon.0 172.17.0.3:6789/0 37 : cluster [INF] fsmap e2:, 1 up:standby 2018-07-19 20:08:37.833388 mon.0 172.17.0.3:6789/0 38 : cluster [INF] pgmap v9: 64 pgs: 63 creating+peering, 1 creating; 0 bytes data, 26783 MB used, 257 GB / 299 GB avail 2018-07-19 20:08:38.041476 mon.0 172.17.0.3:6789/0 39 : cluster [INF] osd.1 172.17.0.3:6804/333 boot 2018-07-19 20:08:38.199873 mon.0 172.17.0.3:6789/0 40 : cluster [INF] osdmap e8: 2 osds: 2 up, 2 in 2018-07-19 20:08:38.346980 mon.0 172.17.0.3:6789/0 41 : cluster [INF] pgmap v10: 64 pgs: 1 creating, 63 creating+peering; 0 bytes data, 26783 MB used, 257 GB / 299 GB avail 2018-07-19 20:08:39.762048 mon.0 172.17.0.3:6789/0 42 : cluster [INF] osdmap e9: 2 osds: 2 up, 2 in 2018-07-19 20:08:39.840227 mon.0 172.17.0.3:6789/0 43 : cluster [INF] pgmap v11: 64 pgs: 1 creating, 63 creating+peering; 0 bytes data, 26783 MB used, 257 GB / 299 GB avail 2018-07-19 20:08:41.988593 mon.0 172.17.0.3:6789/0 44 : cluster [INF] pgmap v12: 64 pgs: 54 creating+peering, 10 creating+activating+undersized+degraded; 0 bytes data, 26783 MB used, 257 GB / 299 GB avail 2018-07-19 20:08:43.348389 mon.0 172.17.0.3:6789/0 45 : cluster [INF] pgmap v13: 64 pgs: 58 creating+peering, 6 creating+activating+undersized+degraded; 0 bytes data, 53567 MB used, 515 GB / 598 GB avail 2018-07-19 20:08:48.439739 mon.0 172.17.0.3:6789/0 46 : cluster [INF] pgmap v14: 64 pgs: 58 creating+peering, 6 creating+activating+undersized+degraded; 0 bytes data, 53183 MB used, 515 GB / 598 GB avail 2018-07-19 20:08:53.593137 mon.0 172.17.0.3:6789/0 47 : cluster [INF] pgmap v15: 64 pgs: 58 creating+peering, 6 creating+activating+undersized+degraded; 0 bytes data, 52791 MB used, 516 GB / 598 GB avail 2018-07-19 20:08:58.594347 mon.0 172.17.0.3:6789/0 48 : cluster [INF] pgmap v16: 64 pgs: 58 creating+peering, 6 creating+activating+undersized+degraded; 0 bytes data, 52791 MB used, 516 GB / 598 GB avail 2018-07-19 20:08:59.182086 mon.0 172.17.0.3:6789/0 49 : cluster [INF] osdmap e10: 2 osds: 2 up, 2 in 2018-07-19 20:08:59.312901 mon.0 172.17.0.3:6789/0 50 : cluster [INF] pgmap v17: 64 pgs: 53 creating+peering, 2 undersized+degraded+peered, 
9 creating+activating+undersized+degraded; 0 bytes data, 52016 MB used, 517 GB / 598 GB avail 2018-07-19 20:09:03.390872 mon.0 172.17.0.3:6789/0 51 : cluster [INF] pgmap v18: 64 pgs: 21 creating+peering, 32 active+undersized+degraded, 2 undersized+degraded+peered, 9 creating+activating+undersized+degraded; 0 bytes data, 52016 MB used, 517 GB / 598 GB avail 2018-07-19 20:09:04.929901 mon.0 172.17.0.3:6789/0 52 : cluster [INF] pgmap v19: 64 pgs: 41 active+undersized+degraded, 23 creating+activating+undersized+degraded; 0 bytes data, 52017 MB used, 517 GB / 598 GB avail 2018-07-19 20:09:08.305990 mon.0 172.17.0.3:6789/0 53 : cluster [INF] pgmap v20: 64 pgs: 41 active+undersized+degraded, 23 creating+activating+undersized+degraded; 0 bytes data, 52017 MB used, 517 GB / 598 GB avail 2018-07-19 20:09:09.820130 mon.0 172.17.0.3:6789/0 54 : cluster [INF] osdmap e11: 2 osds: 2 up, 2 in 2018-07-19 20:09:09.885711 mon.0 172.17.0.3:6789/0 55 : cluster [INF] pgmap v21: 64 pgs: 52 active+undersized+degraded, 12 creating+activating+undersized+degraded; 0 bytes data, 52018 MB used, 517 GB / 598 GB avail 2018-07-19 20:09:09.952655 mon.0 172.17.0.3:6789/0 56 : cluster [INF] pgmap v22: 64 pgs: 52 active+undersized+degraded, 12 creating+activating+undersized+degraded; 0 bytes data, 52018 MB used, 517 GB / 598 GB avail 2018-07-19 20:09:11.106091 mon.0 172.17.0.3:6789/0 57 : cluster [INF] osdmap e12: 2 osds: 2 up, 2 in 2018-07-19 20:09:11.166637 mon.0 172.17.0.3:6789/0 58 : cluster [INF] pgmap v23: 64 pgs: 52 active+undersized+degraded, 12 creating+activating+undersized+degraded; 0 bytes data, 52018 MB used, 517 GB / 598 GB avail 2018-07-19 20:09:13.543228 mon.0 172.17.0.3:6789/0 59 : cluster [INF] pgmap v24: 64 pgs: 52 active+undersized+degraded, 12 creating+activating+undersized+degraded; 0 bytes data, 52018 MB used, 517 GB / 598 GB avail 2018-07-19 20:09:15.136147 mon.0 172.17.0.3:6789/0 60 : cluster [INF] pgmap v25: 64 pgs: 61 active+undersized+degraded, 3 creating+activating+undersized+degraded; 0 bytes data, 52019 MB used, 517 GB / 598 GB avail 2018-07-19 20:09:21.177833 mon.0 172.17.0.3:6789/0 61 : cluster [INF] pgmap v26: 64 pgs: 64 active+undersized+degraded; 24576 kB data, 52051 MB used, 517 GB / 598 GB avail; 1366 B/s rd, 4683 kB/s wr, 5 op/s; 18/27 objects degraded (66.667%) 2018-07-19 20:09:23.793888 mon.0 172.17.0.3:6789/0 62 : cluster [INF] pgmap v27: 64 pgs: 64 active+undersized+degraded; 53248 kB data, 52124 MB used, 516 GB / 598 GB avail; 1748 B/s rd, 6060 kB/s wr, 5 op/s; 32/48 objects degraded (66.667%) 2018-07-19 20:09:25.112057 mon.0 172.17.0.3:6789/0 64 : cluster [INF] pgmap v28: 64 pgs: 64 active+undersized+degraded; 61440 kB data, 52204 MB used, 516 GB / 598 GB avail; 1308 B/s rd, 5889 kB/s wr, 4 op/s; 36/54 objects degraded (66.667%) 2018-07-19 20:09:25.436545 mon.0 172.17.0.3:6789/0 66 : cluster [INF] osdmap e13: 2 osds: 2 up, 2 in 2018-07-19 20:09:25.480032 mon.0 172.17.0.3:6789/0 67 : cluster [INF] pgmap v29: 68 pgs: 4 creating, 64 active+undersized+degraded; 61440 kB data, 52204 MB used, 516 GB / 598 GB avail; 4244 kB/s wr, 2 op/s; 36/54 objects degraded (66.667%) 2018-07-19 20:09:26.781931 mon.0 172.17.0.3:6789/0 70 : cluster [INF] osdmap e14: 2 osds: 2 up, 2 in 2018-07-19 20:09:26.850739 mon.0 172.17.0.3:6789/0 71 : cluster [INF] pgmap v30: 72 pgs: 8 creating, 64 active+undersized+degraded; 61440 kB data, 52204 MB used, 516 GB / 598 GB avail; 36/54 objects degraded (66.667%) 2018-07-19 20:09:27.636796 mon.0 172.17.0.3:6789/0 73 : cluster [INF] osdmap e15: 2 osds: 2 up, 2 in 
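/var/log/ceph/ceph.log above replays the same osdmap and pgmap progression with the cluster-log prefix (timestamp, mon.0, address, sequence number, "cluster [INF]"). When a run like this needs to be summarized, the pgmap entries carry the useful numbers: PG states, data size, and the "X/Y objects degraded" ratio. A small illustrative parser for those entries, not taken from the test harness, assuming one log entry per line as in the on-disk file:

    import re

    # Matches the pgmap summary and the optional degraded-object ratio in
    # cluster-log lines such as:
    #   ... cluster [INF] pgmap v26: 64 pgs: 64 active+undersized+degraded;
    #       24576 kB data, ...; 18/27 objects degraded (66.667%)
    PGMAP_RE = re.compile(r"pgmap v(\d+): (\d+) pgs: ([^;]+);")
    DEGRADED_RE = re.compile(r"(\d+)/(\d+) objects degraded \(([\d.]+)%\)")

    def pgmap_entries(path="/var/log/ceph/ceph.log"):  # path is an assumption
        with open(path) as fh:
            for line in fh:
                m = PGMAP_RE.search(line)
                if not m:
                    continue
                d = DEGRADED_RE.search(line)
                yield {
                    "version": int(m.group(1)),
                    "pgs": int(m.group(2)),
                    "states": m.group(3).strip(),
                    "degraded": (int(d.group(1)), int(d.group(2))) if d else None,
                }

    if __name__ == "__main__":
        for entry in pgmap_entries():
            print(entry["version"], entry["states"], entry["degraded"])

The cluster log resumes below with the pool creation epochs (osdmap e13 through e15) and the filesystem coming up.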
2018-07-19 20:09:27.703857 mon.0 172.17.0.3:6789/0 74 : cluster [INF] pgmap v31: 72 pgs: 8 creating, 64 active+undersized+degraded; 61440 kB data, 52204 MB used, 516 GB / 598 GB avail; 36/54 objects degraded (66.667%) 2018-07-19 20:09:28.505059 mon.0 172.17.0.3:6789/0 76 : cluster [INF] fsmap e3: 0/0/1 up, 1 up:standby 2018-07-19 20:09:28.586553 mon.0 172.17.0.3:6789/0 77 : cluster [INF] fsmap e4: 1/1/1 up {0=cephfs=up:creating} 2018-07-19 20:09:29.494964 mon.0 172.17.0.3:6789/0 78 : cluster [INF] pgmap v32: 72 pgs: 7 creating+activating+undersized+degraded, 65 active+undersized+degraded; 62464 kB data, 52246 MB used, 516 GB / 598 GB avail; 409 kB/s wr, 0 op/s; 38/57 objects degraded (66.667%) 2018-07-19 20:09:31.734155 mon.0 172.17.0.3:6789/0 79 : cluster [INF] mds.0 172.17.0.3:6808/602994783 up:active 2018-07-19 20:09:31.734387 mon.0 172.17.0.3:6789/0 80 : cluster [INF] fsmap e5: 1/1/1 up {0=cephfs=up:active} 2018-07-19 20:09:34.073385 mon.0 172.17.0.3:6789/0 81 : cluster [INF] pgmap v33: 72 pgs: 72 active+undersized+degraded; 62472 kB data, 52266 MB used, 516 GB / 598 GB avail; 165 kB/s wr, 7 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:38.398124 mon.0 172.17.0.3:6789/0 82 : cluster [INF] pgmap v34: 72 pgs: 72 active+undersized+degraded; 62472 kB data, 52266 MB used, 516 GB / 598 GB avail; 1821 B/s wr, 5 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:39.737008 mon.0 172.17.0.3:6789/0 83 : cluster [INF] pgmap v35: 72 pgs: 72 active+undersized+degraded; 62484 kB data, 52266 MB used, 516 GB / 598 GB avail; 2719 B/s wr, 1 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:43.287180 mon.0 172.17.0.3:6789/0 84 : cluster [INF] pgmap v36: 72 pgs: 72 active+undersized+degraded; 62484 kB data, 52267 MB used, 516 GB / 598 GB avail; 3071 B/s wr, 1 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:44.579202 mon.0 172.17.0.3:6789/0 85 : cluster [INF] pgmap v37: 72 pgs: 72 active+undersized+degraded; 62488 kB data, 52267 MB used, 516 GB / 598 GB avail; 1257 B/s wr, 0 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:48.306218 mon.0 172.17.0.3:6789/0 86 : cluster [INF] pgmap v38: 72 pgs: 72 active+undersized+degraded; 62488 kB data, 52267 MB used, 516 GB / 598 GB avail; 1228 B/s wr, 0 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:49.757385 mon.0 172.17.0.3:6789/0 87 : cluster [INF] pgmap v39: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB used, 516 GB / 598 GB avail; 612 B/s wr, 0 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:53.315276 mon.0 172.17.0.3:6789/0 88 : cluster [INF] pgmap v40: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB used, 516 GB / 598 GB avail; 614 B/s wr, 0 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:54.924712 mon.0 172.17.0.3:6789/0 89 : cluster [INF] pgmap v41: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB used, 516 GB / 598 GB avail; 208 B/s wr, 0 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:58.292123 mon.0 172.17.0.3:6789/0 90 : cluster [INF] pgmap v42: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB used, 516 GB / 598 GB avail; 204 B/s wr, 0 op/s; 80/120 objects degraded (66.667%) 2018-07-19 20:09:59.789666 mon.0 172.17.0.3:6789/0 91 : cluster [INF] pgmap v43: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:03.318666 mon.0 172.17.0.3:6789/0 92 : cluster [INF] pgmap v44: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB 
used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:04.593821 mon.0 172.17.0.3:6789/0 93 : cluster [INF] pgmap v45: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:08.653384 mon.0 172.17.0.3:6789/0 94 : cluster [INF] pgmap v46: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:13.378899 mon.0 172.17.0.3:6789/0 95 : cluster [INF] pgmap v47: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:14.679485 mon.0 172.17.0.3:6789/0 96 : cluster [INF] pgmap v48: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52267 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:18.377706 mon.0 172.17.0.3:6789/0 97 : cluster [INF] pgmap v49: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52268 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:19.682320 mon.0 172.17.0.3:6789/0 98 : cluster [INF] pgmap v50: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52268 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:23.325216 mon.0 172.17.0.3:6789/0 99 : cluster [INF] pgmap v51: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52268 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:25.392655 mon.0 172.17.0.3:6789/0 100 : cluster [INF] pgmap v52: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52268 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:28.309993 mon.0 172.17.0.3:6789/0 101 : cluster [INF] pgmap v53: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52276 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) 2018-07-19 20:10:29.494197 mon.0 172.17.0.3:6789/0 102 : cluster [INF] pgmap v54: 72 pgs: 72 active+undersized+degraded; 62490 kB data, 52284 MB used, 516 GB / 598 GB avail; 80/120 objects degraded (66.667%) /var/log/ceph/ceph-osd.0.log 2018-07-19 20:08:26.358191 7fbd529f8940 0 ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185), process ceph-osd, pid 113 2018-07-19 20:08:26.421928 7fbd529f8940 1 filestore(/var/lib/ceph/osd/ceph-0) mkfs in /var/lib/ceph/osd/ceph-0 2018-07-19 20:08:26.422318 7fbd529f8940 1 filestore(/var/lib/ceph/osd/ceph-0) mkfs generated fsid b3c5545e-9269-4ef3-86ca-e2a25802b808 2018-07-19 20:08:26.582200 7fbd529f8940 1 filestore(/var/lib/ceph/osd/ceph-0) write_version_stamp 4 2018-07-19 20:08:26.791933 7fbd529f8940 0 filestore(/var/lib/ceph/osd/ceph-0) backend generic (magic 0x794c7630) 2018-07-19 20:08:26.925972 7fbd529f8940 1 filestore(/var/lib/ceph/osd/ceph-0) leveldb db exists/created 2018-07-19 20:08:26.926089 7fbd529f8940 -1 FileJournal::_open_any: aio not supported without directio; disabling aio 2018-07-19 20:08:26.926461 7fbd529f8940 1 journal _open /var/lib/ceph/osd/ceph-0/journal fd 9: 134217728 bytes, block size 4096 bytes, directio = 0, aio = 0 2018-07-19 20:08:26.926564 7fbd529f8940 0 filestore(/var/lib/ceph/osd/ceph-0) mkjournal created journal on /var/lib/ceph/osd/ceph-0/journal 2018-07-19 20:08:26.984573 7fbd529f8940 1 filestore(/var/lib/ceph/osd/ceph-0) mkfs done in /var/lib/ceph/osd/ceph-0 2018-07-19 20:08:26.984839 7fbd529f8940 0 filestore(/var/lib/ceph/osd/ceph-0) backend generic (magic 0x794c7630) 2018-07-19 20:08:26.985464 7fbd529f8940 0 
genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option 2018-07-19 20:08:26.985491 7fbd529f8940 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config option 2018-07-19 20:08:26.985533 7fbd529f8940 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: splice is supported 2018-07-19 20:08:27.057310 7fbd529f8940 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: syncfs(2) syscall fully supported (by glibc and kernel) 2018-07-19 20:08:27.057709 7fbd529f8940 0 filestore(/var/lib/ceph/osd/ceph-0) limited size xattrs 2018-07-19 20:08:27.058269 7fbd529f8940 1 leveldb: Recovering log #3 2018-07-19 20:08:27.324185 7fbd529f8940 1 leveldb: Delete type=0 #3 2018-07-19 20:08:27.324283 7fbd529f8940 1 leveldb: Delete type=3 #2 2018-07-19 20:08:27.324433 7fbd529f8940 -1 FileJournal::_open_any: aio not supported without directio; disabling aio 2018-07-19 20:08:27.324467 7fbd529f8940 0 filestore(/var/lib/ceph/osd/ceph-0) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled 2018-07-19 20:08:27.324699 7fbd529f8940 1 journal _open /var/lib/ceph/osd/ceph-0/journal fd 14: 134217728 bytes, block size 4096 bytes, directio = 0, aio = 0 2018-07-19 20:08:27.324860 7fbd529f8940 1 journal _open /var/lib/ceph/osd/ceph-0/journal fd 14: 134217728 bytes, block size 4096 bytes, directio = 0, aio = 0 2018-07-19 20:08:27.325431 7fbd529f8940 1 filestore(/var/lib/ceph/osd/ceph-0) upgrade 2018-07-19 20:08:27.325542 7fbd529f8940 -1 filestore(/var/lib/ceph/osd/ceph-0) could not find #-1:7b3f43c4:::osd_superblock:0# in index: (2) No such file or directory 2018-07-19 20:08:27.988875 7fbd529f8940 1 journal close /var/lib/ceph/osd/ceph-0/journal 2018-07-19 20:08:28.019405 7fbd529f8940 -1 created object store /var/lib/ceph/osd/ceph-0 for osd.0 fsid 3cc57754-8c43-4f67-b7bb-ee0a564141d3 2018-07-19 20:08:28.019502 7fbd529f8940 -1 auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory 2018-07-19 20:08:28.020161 7fbd529f8940 -1 created new key in keyring /var/lib/ceph/osd/ceph-0/keyring 2018-07-19 20:08:30.157957 7f3b84064940 0 ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185), process ceph-osd, pid 177 2018-07-19 20:08:30.161970 7f3b84064940 0 pidfile_write: ignore empty --pid-file 2018-07-19 20:08:30.244824 7f3b84064940 0 filestore(/var/lib/ceph/osd/ceph-0) backend generic (magic 0x794c7630) 2018-07-19 20:08:30.245502 7f3b84064940 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option 2018-07-19 20:08:30.245563 7f3b84064940 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config option 2018-07-19 20:08:30.245648 7f3b84064940 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: splice is supported 2018-07-19 20:08:30.290869 7f3b84064940 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: syncfs(2) syscall fully supported (by glibc and kernel) 2018-07-19 20:08:30.291319 7f3b84064940 0 filestore(/var/lib/ceph/osd/ceph-0) limited size xattrs 2018-07-19 20:08:30.292083 7f3b84064940 1 leveldb: Recovering log #5 2018-07-19 20:08:30.292172 7f3b84064940 1 leveldb: Level-0 table #7: started 2018-07-19 20:08:30.324532 7f3b84064940 1 leveldb: Level-0 table #7: 146 bytes 
OK 2018-07-19 20:08:30.425411 7f3b84064940 1 leveldb: Delete type=0 #5 2018-07-19 20:08:30.425665 7f3b84064940 1 leveldb: Delete type=3 #4 2018-07-19 20:08:30.426013 7f3b84064940 -1 FileJournal::_open_any: aio not supported without directio; disabling aio 2018-07-19 20:08:30.426100 7f3b84064940 0 filestore(/var/lib/ceph/osd/ceph-0) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled 2018-07-19 20:08:30.426510 7f3b84064940 1 journal _open /var/lib/ceph/osd/ceph-0/journal fd 18: 134217728 bytes, block size 4096 bytes, directio = 0, aio = 0 2018-07-19 20:08:30.427193 7f3b84064940 1 journal _open /var/lib/ceph/osd/ceph-0/journal fd 18: 134217728 bytes, block size 4096 bytes, directio = 0, aio = 0 2018-07-19 20:08:30.428038 7f3b84064940 1 filestore(/var/lib/ceph/osd/ceph-0) upgrade 2018-07-19 20:08:30.480217 7f3b84064940 0 cls/cephfs/cls_cephfs.cc:202: loading cephfs_size_scan 2018-07-19 20:08:30.485958 7f3b84064940 0 cls/hello/cls_hello.cc:305: loading cls_hello 2018-07-19 20:08:30.489630 7f3b84064940 0 osd.0 0 crush map has features 2199057072128, adjusting msgr requires for clients 2018-07-19 20:08:30.489694 7f3b84064940 0 osd.0 0 crush map has features 2199057072128 was 8705, adjusting msgr requires for mons 2018-07-19 20:08:30.489730 7f3b84064940 0 osd.0 0 crush map has features 2199057072128, adjusting msgr requires for osds 2018-07-19 20:08:30.489872 7f3b84064940 0 osd.0 0 load_pgs 2018-07-19 20:08:30.489957 7f3b84064940 0 osd.0 0 load_pgs opened 0 pgs 2018-07-19 20:08:30.490013 7f3b84064940 0 osd.0 0 using 0 op queue with priority op cut off at 64. 2018-07-19 20:08:30.491416 7f3b84064940 -1 osd.0 0 log_to_monitors {default=true} 2018-07-19 20:08:30.495649 7f3b84064940 0 osd.0 0 done with init, starting boot process 2018-07-19 20:08:31.498227 7f3b74b3c700 0 osd.0 5 crush map has features 2200130813952, adjusting msgr requires for clients 2018-07-19 20:08:31.498233 7f3b74b3c700 0 osd.0 5 crush map has features 2200130813952 was 2199057080833, adjusting msgr requires for mons 2018-07-19 20:08:31.498241 7f3b74b3c700 0 osd.0 5 crush map has features 2200130813952, adjusting msgr requires for osds /var/log/ceph/ceph.audit.log 2018-07-19 20:08:24.286959 mon.0 172.17.0.3:6789/0 9 : audit [INF] from='client.? 172.17.0.3:0/2650023655' entity='client.admin' cmd=[{"prefix": "osd setcrushmap"}]: dispatch 2018-07-19 20:08:24.739033 mon.0 172.17.0.3:6789/0 10 : audit [INF] from='client.? 172.17.0.3:0/2650023655' entity='client.admin' cmd='[{"prefix": "osd setcrushmap"}]': finished 2018-07-19 20:08:25.297968 mon.0 172.17.0.3:6789/0 13 : audit [INF] from='client.? 172.17.0.3:0/2350260045' entity='client.admin' cmd=[{"prefix": "osd create"}]: dispatch 2018-07-19 20:08:26.290119 mon.0 172.17.0.3:6789/0 14 : audit [INF] from='client.? 172.17.0.3:0/2350260045' entity='client.admin' cmd='[{"prefix": "osd create"}]': finished 2018-07-19 20:08:28.476077 mon.0 172.17.0.3:6789/0 17 : audit [INF] from='client.? 172.17.0.3:0/308550940' entity='client.admin' cmd=[{"prefix": "auth add", "entity": "osd.0", "caps": ["osd", "allow *", "mon", "allow rwx"]}]: dispatch 2018-07-19 20:08:28.627976 mon.0 172.17.0.3:6789/0 18 : audit [INF] from='client.? 172.17.0.3:0/308550940' entity='client.admin' cmd='[{"prefix": "auth add", "entity": "osd.0", "caps": ["osd", "allow *", "mon", "allow rwx"]}]': finished 2018-07-19 20:08:29.197281 mon.0 172.17.0.3:6789/0 19 : audit [INF] from='client.? 
172.17.0.3:0/1567463050' entity='client.admin' cmd=[{"prefix": "osd crush add", "args": ["root=default", "host=cephbox"], "id": 0, "weight": 1.0}]: dispatch 2018-07-19 20:08:30.098945 mon.0 172.17.0.3:6789/0 20 : audit [INF] from='client.? 172.17.0.3:0/1567463050' entity='client.admin' cmd='[{"prefix": "osd crush add", "args": ["root=default", "host=cephbox"], "id": 0, "weight": 1.0}]': finished 2018-07-19 20:08:30.657860 mon.0 172.17.0.3:6789/0 23 : audit [INF] from='client.? 172.17.0.3:0/2848611712' entity='client.admin' cmd=[{"prefix": "osd create"}]: dispatch 2018-07-19 20:08:31.401323 mon.0 172.17.0.3:6789/0 25 : audit [INF] from='client.? 172.17.0.3:0/2848611712' entity='client.admin' cmd='[{"prefix": "osd create"}]': finished 2018-07-19 20:08:34.572919 mon.0 172.17.0.3:6789/0 30 : audit [INF] from='client.? 172.17.0.3:0/249033197' entity='client.admin' cmd=[{"prefix": "auth add", "entity": "osd.1", "caps": ["osd", "allow *", "mon", "allow rwx"]}]: dispatch 2018-07-19 20:08:34.760467 mon.0 172.17.0.3:6789/0 31 : audit [INF] from='client.? 172.17.0.3:0/249033197' entity='client.admin' cmd='[{"prefix": "auth add", "entity": "osd.1", "caps": ["osd", "allow *", "mon", "allow rwx"]}]': finished 2018-07-19 20:08:35.282134 mon.0 172.17.0.3:6789/0 32 : audit [INF] from='client.? 172.17.0.3:0/2669058995' entity='client.admin' cmd=[{"prefix": "osd crush add", "args": ["root=default", "host=cephbox"], "id": 1, "weight": 1.0}]: dispatch 2018-07-19 20:08:36.441582 mon.0 172.17.0.3:6789/0 33 : audit [INF] from='client.? 172.17.0.3:0/2669058995' entity='client.admin' cmd='[{"prefix": "osd crush add", "args": ["root=default", "host=cephbox"], "id": 1, "weight": 1.0}]': finished 2018-07-19 20:09:24.143259 mon.0 172.17.0.3:6789/0 63 : audit [INF] from='client.? 172.17.0.3:0/3493502123' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 4, "pool": "cephfs_data"}]: dispatch 2018-07-19 20:09:25.354558 mon.0 172.17.0.3:6789/0 65 : audit [INF] from='client.? 172.17.0.3:0/3493502123' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 4, "pool": "cephfs_data"}]': finished 2018-07-19 20:09:25.876567 mon.0 172.17.0.3:6789/0 68 : audit [INF] from='client.? 172.17.0.3:0/2007993106' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 4, "pool": "cephfs_metadata"}]: dispatch 2018-07-19 20:09:26.722582 mon.0 172.17.0.3:6789/0 69 : audit [INF] from='client.? 172.17.0.3:0/2007993106' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 4, "pool": "cephfs_metadata"}]': finished 2018-07-19 20:09:27.283725 mon.0 172.17.0.3:6789/0 72 : audit [INF] from='client.? 172.17.0.3:0/2849102185' entity='client.admin' cmd=[{"prefix": "fs new", "data": "cephfs_data", "fs_name": "cephfs", "metadata": "cephfs_metadata"}]: dispatch 2018-07-19 20:09:28.439087 mon.0 172.17.0.3:6789/0 75 : audit [INF] from='client.? 
172.17.0.3:0/2849102185' entity='client.admin' cmd='[{"prefix": "fs new", "data": "cephfs_data", "fs_name": "cephfs", "metadata": "cephfs_metadata"}]': finished /var/log/ceph/ceph-mds.cephfs.log 2018-07-19 20:08:36.510081 7f60678ed280 0 ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185), process ceph-mds, pid 338 2018-07-19 20:08:36.511515 7f60678ed280 0 pidfile_write: ignore empty --pid-file 2018-07-19 20:08:36.795592 7f606183e700 1 mds.cephfs handle_mds_map standby 2018-07-19 20:09:28.582482 7f606183e700 1 mds.0.4 handle_mds_map i am now mds.0.4 2018-07-19 20:09:28.582490 7f606183e700 1 mds.0.4 handle_mds_map state change up:boot --> up:creating 2018-07-19 20:09:28.582835 7f606183e700 0 mds.0.cache creating system inode with ino:1 2018-07-19 20:09:28.583301 7f606183e700 0 mds.0.cache creating system inode with ino:100 2018-07-19 20:09:28.583311 7f606183e700 0 mds.0.cache creating system inode with ino:600 2018-07-19 20:09:28.583495 7f606183e700 0 mds.0.cache creating system inode with ino:601 2018-07-19 20:09:28.583660 7f606183e700 0 mds.0.cache creating system inode with ino:602 2018-07-19 20:09:28.583836 7f606183e700 0 mds.0.cache creating system inode with ino:603 2018-07-19 20:09:28.583950 7f606183e700 0 mds.0.cache creating system inode with ino:604 2018-07-19 20:09:28.584081 7f606183e700 0 mds.0.cache creating system inode with ino:605 2018-07-19 20:09:28.584186 7f606183e700 0 mds.0.cache creating system inode with ino:606 2018-07-19 20:09:28.584321 7f606183e700 0 mds.0.cache creating system inode with ino:607 2018-07-19 20:09:28.584445 7f606183e700 0 mds.0.cache creating system inode with ino:608 2018-07-19 20:09:28.584561 7f606183e700 0 mds.0.cache creating system inode with ino:609 2018-07-19 20:09:31.443514 7f605cf34700 1 mds.0.4 creating_done 2018-07-19 20:09:31.734630 7f606183e700 1 mds.0.4 handle_mds_map i am now mds.0.4 2018-07-19 20:09:31.734636 7f606183e700 1 mds.0.4 handle_mds_map state change up:creating --> up:active 2018-07-19 20:09:31.734649 7f606183e700 1 mds.0.4 recovery_done -- successful recovery! 2018-07-19 20:09:31.734694 7f606183e700 1 mds.0.4 active_start 2018-07-19 20:09:31.767394 7f605bb2e700 1 mds.client.admin ms_verify_authorizer: cannot decode auth caps bl of length 0 2018-07-19 20:09:33.331043 7f605b92c700 1 mds.client.admin ms_verify_authorizer: cannot decode auth caps bl of length 0 2018-07-19 20:09:37.365212 7f605b92c700 1 mds.client.admin ms_verify_authorizer: cannot decode auth caps bl of length 0
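The audit log above records the whole provisioning sequence as monitor commands (osd setcrushmap, osd create, auth add and osd crush add for each OSD, then the two osd pool create calls and fs new), and ceph-mds.cephfs.log ends with mds.0 reporting recovery_done and up:active once the filesystem exists. The setup script itself is not part of this dump, but the pool and filesystem steps correspond to the following ceph CLI calls, reconstructed from the audit entries (assumes an admin keyring and the ceph binary on PATH):

    import subprocess

    def ceph(*args):
        # Thin wrapper around the ceph CLI; raises if the command fails.
        subprocess.run(["ceph", *args], check=True)

    # Pool names, pg_num and filesystem name are taken from the audit log:
    #   {"prefix": "osd pool create", "pg_num": 4, "pool": "cephfs_data"}
    #   {"prefix": "osd pool create", "pg_num": 4, "pool": "cephfs_metadata"}
    #   {"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs_metadata", "data": "cephfs_data"}
    ceph("osd", "pool", "create", "cephfs_data", "4")
    ceph("osd", "pool", "create", "cephfs_metadata", "4")
    ceph("fs", "new", "cephfs", "cephfs_metadata", "cephfs_data")

The three "ms_verify_authorizer: cannot decode auth caps bl of length 0" messages at the end of the MDS log appear to correspond to the three ceph-fuse processes started in ceph-client.admin.log; by that point the MDS has already reported recovery_done and up:active.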