Minimize memory internal fragmentation for Bloom filters #6427

Closed · wants to merge 11 commits
1 change: 1 addition & 0 deletions HISTORY.md
@@ -11,6 +11,7 @@

### New Features
* DB identity (`db_id`) and DB session identity (`db_session_id`) are added to table properties and stored in SST files. SST files generated from SstFileWriter and Repairer have DB identity “SST Writer” and “DB Repairer”, respectively. Their DB session IDs are generated in the same way as `DB::GetDbSessionId`. The session ID for SstFileWriter (resp., Repairer) resets every time `SstFileWriter::Open` (resp., `Repairer::Run`) is called.
* Added experimental option BlockBasedTableOptions::optimize_filters_for_memory for reducing allocated memory size of Bloom filters (~10% savings with Jemalloc) while preserving the same general accuracy. To have an effect, the option requires format_version=5 and support for malloc_usable_size. Enabling this option is forward and backward compatible with existing format_version=5.

### Bug Fixes
* Fail recovery and report once hitting a physical log record checksum mismatch, while reading MANIFEST. RocksDB should not continue processing the MANIFEST any further.
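As a quick illustration of the new option (not part of the diff), application code enabling it could look like the following sketch; the 10 bits/key policy and cache settings are just example choices:

```cpp
#include "rocksdb/filter_policy.h"
#include "rocksdb/options.h"
#include "rocksdb/table.h"

// Sketch: options that let optimize_filters_for_memory take effect.
// Requires format_version=5 and a platform with malloc_usable_size.
rocksdb::Options MakeOptionsWithMemoryOptimizedFilters() {
  rocksdb::BlockBasedTableOptions table_options;
  table_options.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10.0));
  table_options.format_version = 5;
  table_options.optimize_filters_for_memory = true;  // EXPERIMENTAL
  table_options.cache_index_and_filter_blocks = true;

  rocksdb::Options options;
  options.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_options));
  return options;
}
```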
1 change: 1 addition & 0 deletions db_stress_tool/db_stress_common.h
@@ -146,6 +146,7 @@ DECLARE_int32(reopen);
DECLARE_double(bloom_bits);
DECLARE_bool(use_block_based_filter);
DECLARE_bool(partition_filters);
DECLARE_bool(optimize_filters_for_memory);
DECLARE_int32(index_type);
DECLARE_string(db);
DECLARE_string(secondaries_base);
5 changes: 5 additions & 0 deletions db_stress_tool/db_stress_gflags.cc
@@ -361,6 +361,11 @@ DEFINE_bool(partition_filters, false,
"use partitioned filters "
"for block-based table");

DEFINE_bool(
optimize_filters_for_memory,
ROCKSDB_NAMESPACE::BlockBasedTableOptions().optimize_filters_for_memory,
"Minimize memory footprint of filters");

DEFINE_int32(
index_type,
static_cast<int32_t>(
2 changes: 2 additions & 0 deletions db_stress_tool/db_stress_test_base.cc
@@ -1762,6 +1762,8 @@ void StressTest::Open() {
static_cast<int32_t>(FLAGS_index_block_restart_interval);
block_based_options.filter_policy = filter_policy_;
block_based_options.partition_filters = FLAGS_partition_filters;
block_based_options.optimize_filters_for_memory =
FLAGS_optimize_filters_for_memory;
block_based_options.index_type =
static_cast<BlockBasedTableOptions::IndexType>(FLAGS_index_type);
options_.table_factory.reset(
28 changes: 28 additions & 0 deletions include/rocksdb/table.h
@@ -200,6 +200,34 @@ struct BlockBasedTableOptions {
// incompatible with block-based filters.
bool partition_filters = false;

// EXPERIMENTAL Option to generate Bloom filters that minimize memory
// internal fragmentation.
//
// When false, malloc_usable_size is not available, or format_version < 5,
// filters are generated without regard to internal fragmentation when
// loaded into memory (historical behavior). When true (and
// malloc_usable_size is available and format_version >= 5), then Bloom
// filters are generated to "round up" and "round down" their sizes to
// minimize internal fragmentation when loaded into memory, assuming the
// reading DB has the same memory allocation characteristics as the
// generating DB. This option does not break forward or backward
// compatibility.
//
// While individual filters will vary in bits/key and false positive rate
// when this setting is true, the implementation attempts to maintain a weighted
// average FP rate for filters consistent with this option set to false.
//
// With Jemalloc for example, this setting is expected to save about 10% of
// the memory footprint and block cache charge of filters, while increasing
// disk usage of filters by about 1-2% due to encoding efficiency losses
// with variance in bits/key.
//
// Because some memory counted by block cache might be unmapped pages within
// internal fragmentation, this option can increase observed RSS memory
// usage. With cache_index_and_filter_blocks=true, this option makes the
// block cache better at using the space it is allowed to use.

Contributor comment:

Can we mention in the comment that the implementation goes against the best-practice suggestion in malloc_usable_size()'s man page?

bool optimize_filters_for_memory = false;

// Use delta encoding to compress keys in blocks.
// ReadOptions::pin_data requires this option to be disabled.
//
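To make the fragmentation concrete, here is a rough, hypothetical illustration; the size classes assumed below (8 KiB, 10 KiB, ...) match jemalloc's usual spacing of four classes per power of two, but the exact classes depend on the allocator configuration:

```cpp
// Hypothetical numbers, not taken from the PR.
// A filter of 8192 data bytes + 5 metadata bytes = 8197 bytes requested.
constexpr size_t kRequested = 8192 + 5;
// An 8197-byte request falls into the (assumed) 10240-byte size class,
// so ~2043 bytes are charged to block cache but never touched.
constexpr size_t kSizeClass = 10240;
constexpr size_t kWasted = kSizeClass - kRequested;  // ~20% slack
// With optimize_filters_for_memory, the builder either rounds the filter
// up to use the whole 10240-byte allocation, or rounds it down so the
// request lands near a size class boundary, and balances the FP-rate
// impact across filters.
```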
1 change: 1 addition & 0 deletions options/options_settable_test.cc
@@ -167,6 +167,7 @@ TEST_F(OptionsSettableTest, BlockBasedTableOptionsAllFieldsSettable) {
"block_size_deviation=8;block_restart_interval=4; "
"metadata_block_size=1024;"
"partition_filters=false;"
"optimize_filters_for_memory=true;"
"index_block_restart_interval=4;"
"filter_policy=bloomfilter:4:true;whole_key_filtering=1;"
"format_version=1;"
4 changes: 4 additions & 0 deletions table/block_based/block_based_table_factory.cc
@@ -268,6 +268,10 @@ static std::unordered_map<std::string, OptionTypeInfo>
{offsetof(struct BlockBasedTableOptions, partition_filters),
OptionType::kBoolean, OptionVerificationType::kNormal,
OptionTypeFlags::kNone, 0}},
{"optimize_filters_for_memory",
{offsetof(struct BlockBasedTableOptions, optimize_filters_for_memory),
OptionType::kBoolean, OptionVerificationType::kNormal,
OptionTypeFlags::kNone, 0}},
{"filter_policy",
{offsetof(struct BlockBasedTableOptions, filter_policy),
OptionType::kUnknown, OptionVerificationType::kByNameAllowFromNull,
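Because the option is registered in the settable-options map above, it should round-trip through an options string as well; a minimal, test-style sketch (assumed includes, trivial error handling):

```cpp
#include <cassert>
#include "rocksdb/convenience.h"
#include "rocksdb/table.h"

void ParseOptimizeFiltersForMemory() {
  rocksdb::BlockBasedTableOptions base, parsed;
  rocksdb::Status s = rocksdb::GetBlockBasedTableOptionsFromString(
      base, "optimize_filters_for_memory=true;format_version=5;", &parsed);
  assert(s.ok());
  assert(parsed.optimize_filters_for_memory);
  assert(parsed.format_version == 5);
}
```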
189 changes: 159 additions & 30 deletions table/block_based/filter_policy.cc
@@ -28,9 +28,12 @@ namespace {
// See description in FastLocalBloomImpl
class FastLocalBloomBitsBuilder : public BuiltinFilterBitsBuilder {
public:
explicit FastLocalBloomBitsBuilder(const int millibits_per_key)
// Non-null aggregate_rounding_balance implies optimize_filters_for_memory
explicit FastLocalBloomBitsBuilder(
const int millibits_per_key,
std::atomic<int64_t>* aggregate_rounding_balance)
: millibits_per_key_(millibits_per_key),
num_probes_(FastLocalBloomImpl::ChooseNumProbes(millibits_per_key_)) {
aggregate_rounding_balance_(aggregate_rounding_balance) {
assert(millibits_per_key >= 1000);
}

@@ -48,33 +51,36 @@ class FastLocalBloomBitsBuilder : public BuiltinFilterBitsBuilder {
}

virtual Slice Finish(std::unique_ptr<const char[]>* buf) override {
size_t num_entry = hash_entries_.size();
std::unique_ptr<char[]> mutable_buf;
uint32_t len_with_metadata =
CalculateSpace(static_cast<uint32_t>(hash_entries_.size()));
char* data = new char[len_with_metadata];
memset(data, 0, len_with_metadata);
CalculateAndAllocate(num_entry, &mutable_buf, /*update_balance*/ true);

assert(data);
assert(mutable_buf);
assert(len_with_metadata >= 5);

// Compute num_probes after any rounding / adjustments
int num_probes = GetNumProbes(num_entry, len_with_metadata);

uint32_t len = len_with_metadata - 5;
if (len > 0) {
AddAllEntries(data, len);
AddAllEntries(mutable_buf.get(), len, num_probes);
}

assert(hash_entries_.empty());

// See BloomFilterPolicy::GetBloomBitsReader re: metadata
// -1 = Marker for newer Bloom implementations
data[len] = static_cast<char>(-1);
mutable_buf[len] = static_cast<char>(-1);
// 0 = Marker for this sub-implementation
data[len + 1] = static_cast<char>(0);
mutable_buf[len + 1] = static_cast<char>(0);
// num_probes (and 0 in upper bits for 64-byte block size)
data[len + 2] = static_cast<char>(num_probes_);
mutable_buf[len + 2] = static_cast<char>(num_probes);
// rest of metadata stays zero

const char* const_data = data;
buf->reset(const_data);
assert(hash_entries_.empty());

return Slice(data, len_with_metadata);
Slice rv(mutable_buf.get(), len_with_metadata);
*buf = std::move(mutable_buf);
return rv;
}

int CalculateNumEntry(const uint32_t bytes) override {
@@ -84,26 +90,144 @@ class FastLocalBloomBitsBuilder : public BuiltinFilterBitsBuilder {
}

uint32_t CalculateSpace(const int num_entry) override {
uint32_t num_cache_lines = 0;
if (millibits_per_key_ > 0 && num_entry > 0) {
num_cache_lines = static_cast<uint32_t>(
(int64_t{num_entry} * millibits_per_key_ + 511999) / 512000);
// NB: the BuiltinFilterBitsBuilder API presumes len fits in uint32_t.
return static_cast<uint32_t>(
CalculateAndAllocate(static_cast<size_t>(num_entry),
/* buf */ nullptr,
/*update_balance*/ false));
}

// To choose size using malloc_usable_size, we have to actually allocate.
uint32_t CalculateAndAllocate(size_t num_entry, std::unique_ptr<char[]>* buf,
bool update_balance) {
std::unique_ptr<char[]> tmpbuf;

// If not for cache line blocks in the filter, what would the target
// length in bytes be?
size_t raw_target_len = static_cast<size_t>(
(uint64_t{num_entry} * millibits_per_key_ + 7999) / 8000);

if (raw_target_len >= size_t{0xffffffc0}) {
// Max supported for this data structure implementation
raw_target_len = size_t{0xffffffc0};
}
return num_cache_lines * 64 + /*metadata*/ 5;

// Round up to nearest multiple of 64 (block size). This adjustment is
// used for target FP rate only so that we don't receive complaints about
// lower FP rate vs. historic Bloom filter behavior.
uint32_t target_len =
static_cast<uint32_t>(raw_target_len + 63) & ~uint32_t{63};

// Return value set to a default; overwritten in some cases
uint32_t rv = target_len + /* metadata */ 5;
#ifdef ROCKSDB_MALLOC_USABLE_SIZE
if (aggregate_rounding_balance_ != nullptr) {
// Do optimize_filters_for_memory, using malloc_usable_size.
// Approach: try to keep FP rate balance better than or on
// target (negative aggregate_rounding_balance_). We can then select a
// lower bound filter size (within reasonable limits) that gets us as
// close to on target as possible. We request allocation for that filter
// size and use malloc_usable_size to "round up" to the actual
// allocation size.

// Race condition on balance is OK because it can only cause temporary
// skew in rounding up vs. rounding down, as long as updates are atomic
// and relative.
int64_t balance = aggregate_rounding_balance_->load();

double target_fp_rate = EstimatedFpRate(num_entry, target_len + 5);
double rv_fp_rate = target_fp_rate;

if (balance < 0) {
// See formula for BloomFilterPolicy::aggregate_rounding_balance_
double for_balance_fp_rate =
-balance / double{0x100000000} + target_fp_rate;

// To simplify, we just try a few modified smaller sizes. This also
// caps how much we vary filter size vs. target, to avoid outlier
// behavior from excessive variance.
for (uint64_t maybe_len64 :
{uint64_t{3} * target_len / 4, uint64_t{13} * target_len / 16,
uint64_t{7} * target_len / 8, uint64_t{15} * target_len / 16}) {
uint32_t maybe_len =
static_cast<uint32_t>(maybe_len64) & ~uint32_t{63};
double maybe_fp_rate = EstimatedFpRate(num_entry, maybe_len + 5);
if (maybe_fp_rate <= for_balance_fp_rate) {
rv = maybe_len + /* metadata */ 5;
rv_fp_rate = maybe_fp_rate;
break;
}
}
}

// Filter blocks are loaded into block cache with their block trailer.
// We need to make sure that's accounted for in choosing a
// fragmentation-friendly size.
const uint32_t kExtraPadding = kBlockTrailerSize;

// Allocate and adjust for usable size
tmpbuf.reset(new char[rv + kExtraPadding]);
size_t usable = malloc_usable_size(tmpbuf.get());
if (usable > rv + kExtraPadding) {
size_t usable_len = (usable - kExtraPadding - /* metadata */ 5);
if (usable_len >= size_t{0xffffffc0}) {
// Max supported for this data structure implementation
usable_len = size_t{0xffffffc0};
}
rv = (static_cast<uint32_t>(usable_len) & ~uint32_t{63}) +
/* metadata */ 5;

Contributor comment:

Here is what the man page (https://man7.org/linux/man-pages/man3/malloc_usable_size.3.html) says:

    The value returned by malloc_usable_size() may be greater than the
    requested size of the allocation because of alignment and minimum
    size constraints.  Although the excess bytes can be overwritten by
    the application without ill effects, this is not good programming
    practice: the number of excess bytes in an allocation depends on the
    underlying implementation.

    The main use of this function is for debugging and introspection.

Are you sure we want to use those bytes? We should at least consult the jemalloc folks before doing that.

rv_fp_rate = EstimatedFpRate(num_entry, rv);
} else {
assert(usable == rv + kExtraPadding);
}
memset(tmpbuf.get(), 0, rv);

if (update_balance) {
int64_t diff = static_cast<int64_t>((rv_fp_rate - target_fp_rate) *
double{0x100000000});
*aggregate_rounding_balance_ += diff;
}
}
#else
(void)update_balance;
#endif // ROCKSDB_MALLOC_USABLE_SIZE
if (buf) {
if (tmpbuf) {
*buf = std::move(tmpbuf);
} else {
buf->reset(new char[rv]());
}
}
return rv;
}

double EstimatedFpRate(size_t keys, size_t bytes) override {
return FastLocalBloomImpl::EstimatedFpRate(keys, bytes - /*metadata*/ 5,
num_probes_, /*hash bits*/ 64);
double EstimatedFpRate(size_t keys, size_t len_with_metadata) override {
int num_probes = GetNumProbes(keys, len_with_metadata);
return FastLocalBloomImpl::EstimatedFpRate(
keys, len_with_metadata - /*metadata*/ 5, num_probes, /*hash bits*/ 64);
}

private:
void AddAllEntries(char* data, uint32_t len) {
// Compute num_probes after any rounding / adjustments
int GetNumProbes(size_t keys, size_t len_with_metadata) {
uint64_t millibits = uint64_t{len_with_metadata - 5} * 8000;
int actual_millibits_per_key =
static_cast<int>(millibits / std::max(keys, size_t{1}));
// BEGIN XXX/TODO(peterd): preserving old/default behavior for now to
// minimize unit test churn. Remove this some time.
if (!aggregate_rounding_balance_) {
actual_millibits_per_key = millibits_per_key_;
}
// END XXX/TODO
return FastLocalBloomImpl::ChooseNumProbes(actual_millibits_per_key);
}

void AddAllEntries(char* data, uint32_t len, int num_probes) {
// Simple version without prefetching:
//
// for (auto h : hash_entries_) {
// FastLocalBloomImpl::AddHash(Lower32of64(h), Upper32of64(h), len,
// num_probes_, data);
// num_probes, data);
// }

const size_t num_entries = hash_entries_.size();
@@ -129,7 +253,7 @@ class FastLocalBloomBitsBuilder : public BuiltinFilterBitsBuilder {
uint32_t& hash_ref = hashes[i & kBufferMask];
uint32_t& byte_offset_ref = byte_offsets[i & kBufferMask];
// Process (add)
FastLocalBloomImpl::AddHashPrepared(hash_ref, num_probes_,
FastLocalBloomImpl::AddHashPrepared(hash_ref, num_probes,
data + byte_offset_ref);
// And buffer
uint64_t h = hash_entries_.front();
@@ -141,13 +265,16 @@

// Finish processing
for (i = 0; i <= kBufferMask && i < num_entries; ++i) {
FastLocalBloomImpl::AddHashPrepared(hashes[i], num_probes_,
FastLocalBloomImpl::AddHashPrepared(hashes[i], num_probes,
data + byte_offsets[i]);
}
}

// Target allocation per added key, in thousandths of a bit.
int millibits_per_key_;
int num_probes_;
// See BloomFilterPolicy::aggregate_rounding_balance_. If nullptr,
// always "round up" like historic behavior.
std::atomic<int64_t>* aggregate_rounding_balance_;
// A deque avoids unnecessary copying of already-saved values
// and has near-minimal peak memory use.
std::deque<uint64_t> hash_entries_;
@@ -457,7 +584,7 @@ const std::vector<BloomFilterPolicy::Mode> BloomFilterPolicy::kAllUserModes = {
};

BloomFilterPolicy::BloomFilterPolicy(double bits_per_key, Mode mode)
: mode_(mode), warned_(false) {
: mode_(mode), warned_(false), aggregate_rounding_balance_(0) {
// Sanitize bits_per_key
if (bits_per_key < 1.0) {
bits_per_key = 1.0;
@@ -549,6 +676,7 @@ FilterBitsBuilder* BloomFilterPolicy::GetFilterBitsBuilder() const {
FilterBitsBuilder* BloomFilterPolicy::GetBuilderWithContext(
const FilterBuildingContext& context) const {
Mode cur = mode_;
bool offm = context.table_options.optimize_filters_for_memory;
// Unusual code construction so that we can have just
// one exhaustive switch without (risky) recursion
for (int i = 0; i < 2; ++i) {
@@ -563,7 +691,8 @@ FilterBitsBuilder* BloomFilterPolicy::GetBuilderWithContext(
case kDeprecatedBlock:
return nullptr;
case kFastLocalBloom:
return new FastLocalBloomBitsBuilder(millibits_per_key_);
return new FastLocalBloomBitsBuilder(
millibits_per_key_, offm ? &aggregate_rounding_balance_ : nullptr);
case kLegacyBloom:
if (whole_bits_per_key_ >= 14 && context.info_log &&
!warned_.load(std::memory_order_relaxed)) {
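The core trick in CalculateAndAllocate (allocate first, then let malloc_usable_size report how much memory was really obtained) can be sketched in isolation as below. This is a simplified illustration with no FP-rate balancing and no block-trailer padding, not the PR's exact logic:

```cpp
#include <malloc.h>  // malloc_usable_size on glibc/jemalloc builds

#include <cstring>
#include <memory>

// Request `target` bytes, then extend the usable length to what the
// allocator actually provided, rounded down to 64-byte filter blocks,
// so the slack is used by the filter instead of wasted as fragmentation.
std::unique_ptr<char[]> AllocateFilter(size_t target, size_t* usable_len) {
  std::unique_ptr<char[]> buf(new char[target]);
  size_t usable = malloc_usable_size(buf.get());
  *usable_len = (usable / 64) * 64;
  if (*usable_len < target) {
    *usable_len = target;  // defensive: usable should be >= target
  }
  std::memset(buf.get(), 0, *usable_len);
  return buf;
}
```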
10 changes: 10 additions & 0 deletions table/block_based/filter_policy_internal.h
@@ -135,6 +135,16 @@ class BloomFilterPolicy : public FilterPolicy {
// only report once per BloomFilterPolicy instance, to keep the noise down.)
mutable std::atomic<bool> warned_;

// State for implementing optimize_filters_for_memory. Essentially, this
// tracks a surplus or deficit in total FP rate of filters generated by
// builders under this policy vs. what would have been generated without
// optimize_filters_for_memory.
//
// To avoid floating point weirdness, the actual value is
// Sum over all generated filters f:
// (predicted_fp_rate(f) - predicted_fp_rate(f|o_f_f_m=false)) * 2^32
mutable std::atomic<int64_t> aggregate_rounding_balance_;

// For newer Bloom filter implementation(s)
FilterBitsReader* GetBloomBitsReader(const Slice& contents) const;
};
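The 2^32 factor above is fixed-point bookkeeping so the running FP-rate surplus or deficit fits in an atomic integer; a hedged sketch of the arithmetic, mirroring rather than reproducing the builder's update:

```cpp
#include <atomic>
#include <cstdint>

// Accumulate (actual - target) FP-rate differences scaled by 2^32.
// A negative balance means filters so far are better than target in
// aggregate, so the next filter may round its size down.
std::atomic<int64_t> aggregate_rounding_balance{0};

void RecordFilter(double actual_fp_rate, double target_fp_rate) {
  int64_t diff = static_cast<int64_t>(
      (actual_fp_rate - target_fp_rate) * double{0x100000000});
  aggregate_rounding_balance += diff;
}

bool MayRoundDownNextFilter() {
  return aggregate_rounding_balance.load() < 0;
}
```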