bluefs _allocate unable to allocate 0x90000 on bdev 1 #9885
Comments
OK, I found how to change the type, but it did not solve the issue.
Yes, that command in the toolbox should work. Another way to set it is in the ceph.conf overrides.
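For anyone landing here, a minimal sketch of both approaches, assuming a default Rook install in the rook-ceph namespace with the rook-ceph-tools toolbox deployed (the deployment and ConfigMap names are the usual defaults, not taken from this thread):

# Option 1: set it at runtime from the toolbox.
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- \
  ceph config set osd bluestore_allocator bitmap

# Option 2: persist it through the ceph.conf override ConfigMap.
kubectl -n rook-ceph apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph
data:
  config: |
    [osd]
    bluestore_allocator = bitmap
EOF

Either way, bluestore_allocator is only read when the OSD starts, so the affected OSDs need a restart to pick it up.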
Hi, it has been fixed.
When done, put a sleep inside the pod; the OSD should then be green again.
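For the record, one way to "put a sleep inside the pod" is to stop the operator from reconciling and override the OSD container's entrypoint so the pod stays up without running ceph-osd, then exec in to run recovery tools (e.g. ceph-bluestore-tool). A sketch only, assuming the default rook-ceph namespace, a crashing OSD with id 3, and that the osd container is the first container in the deployment (all of those are placeholders, not taken from this thread):

# Keep the operator from reverting the patch while debugging.
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0

# Replace the OSD container's command with a sleep so the pod stays Running,
# and drop args/livenessProbe so it isn't restarted by the probe.
kubectl -n rook-ceph patch deployment rook-ceph-osd-3 --type=json -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/command", "value": ["sleep","infinity"]},
  {"op": "remove",  "path": "/spec/template/spec/containers/0/args"},
  {"op": "remove",  "path": "/spec/template/spec/containers/0/livenessProbe"}
]'

# Exec in and inspect/repair, then revert and scale the operator back up.
kubectl -n rook-ceph exec -it deploy/rook-ceph-osd-3 -- bash
kubectl -n rook-ceph rollout undo deployment rook-ceph-osd-3
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=1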
Closing this since it seems resolved.
@osaffer I've encountered the same issue and your plan works, thank you very much!
Some OSDs were full, if I remember well... I would say, monitor your OSDs :D
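In case it helps others, a few commands for keeping an eye on OSD fullness, run via the rook-ceph-tools toolbox (the deployment name is the usual default, not from this thread):

# Cluster-wide and per-OSD utilization; watch %USE creeping toward the full ratios.
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph df
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd df tree

# Health warnings flag "nearfull"/"full" OSDs well before writes start failing.
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph health detail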
@osaffer hehe, you're right, that Ceph instance was full. It is for development and another team supports it, so I have no monitoring for it. Yet :)
You are welcome... my environment is also a development one, so I had not configured any monitoring.
https://tracker.ceph.com/issues/53466
The issue is fixed in ceph/ceph#48854; related issue: https://tracker.ceph.com/issues/53899
ceph-version: 16.2.6-0
rook-version: v1.7.4
Hi,
One morning, all of my OSD pods had crashed, except one per node.
When I check the OSD logs, I can see:
debug -7> 2022-03-10T09:59:05.158+0000 7ff21a9db080 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1646906345165107, "job": 1, "event": "recovery_started", "log_files": [17870]}
debug -6> 2022-03-10T09:59:05.158+0000 7ff21a9db080 4 rocksdb: [db_impl/db_impl_open.cc:760] Recovering log #17870 mode 2
debug -5> 2022-03-10T09:59:05.430+0000 7ff21a9db080 3 rocksdb: [le/block_based/filter_policy.cc:584] Using legacy Bloom filter with high (20) bits/key. Dramatic filter space and/or accuracy improvement is available with format_version>=5.
debug -4> 2022-03-10T09:59:05.434+0000 7ff21a9db080 1 bluefs _allocate unable to allocate 0x90000 on bdev 1, allocator name block, allocator type hybrid, capacity 0x4ffc00000, block size 0x10000, free 0x0, fragmentation 0, allocated 0x0
debug -3> 2022-03-10T09:59:05.434+0000 7ff21a9db080 -1 bluefs _allocate allocation failed, needed 0x80cbb
debug -2> 2022-03-10T09:59:05.434+0000 7ff21a9db080 -1 bluefs _flush_range allocated: 0x0 offset: 0x0 length: 0x80cbb
debug -1> 2022-03-10T09:59:05.442+0000 7ff21a9db080 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.2.6/rpm/el8/BUILD/ceph-16.2.6/src/os/bluestore/BlueFS.cc: In function 'int BlueFS::_flush_range(BlueFS::FileWriter*, uint64_t, uint64_t)' thread 7ff21a9db080 time 2022-03-10T09:59:05.440116+0000
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.2.6/rpm/el8/BUILD/ceph-16.2.6/src/os/bluestore/BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
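As a side note (my own quick decode of the hex figures from the _allocate line, not part of the original log), the numbers already tell the story:

# bash arithmetic handles the hex values straight from the log line.
echo "capacity: $((0x4ffc00000)) bytes"   # 21470642176 bytes, just under 20 GiB
echo "needed:   $((0x90000)) bytes"       # 589824 bytes = 576 KiB
# With free 0x0, BlueFS has no space left on the OSD's block device, so even a
# 576 KiB allocation fails and the OSD aborts with "bluefs enospc".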
After some research, I noticed that other people hit more or less the same issue. They mention a workaround:
[osd]
bluestore_allocator = bitmap
Can you tell me where I can set this parameter?
I have also added some new disks on each node.
Thank you very much