Minimize memory internal fragmentation for Bloom filters
Summary: New experimental option BBTO::optimize_filters_for_memory builds
filters that maximize their use of "usable size" from malloc_usable_size,
which is also used to compute block cache charges.

Rather than always "rounding up," we track state in the
BloomFilterPolicy object to mix essentially "rounding down" and
"rounding up" so that the average FP rate of all generated filters is
the same as without the option. (YMMV, as heavily accessed filters might
unluckily end up with lower accuracy.)
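
In sketch form, the decision to round down might look like the following
(simplified, hypothetical helper; the real logic is in
FastLocalBloomBitsBuilder::CalculateAndAllocate in the diff below):

  #include <atomic>
  #include <cstdint>

  // Sketch only (not the exact RocksDB code). The policy keeps a signed sum of
  // (chosen FP rate - target FP rate) over all filters it has built, scaled by
  // 2^32 into an int64_t. A negative sum means accuracy has been "banked", so
  // the next filter may round down as long as it stays within that surplus.
  bool MayRoundDown(const std::atomic<int64_t>& balance,
                    double candidate_fp_rate, double target_fp_rate) {
    int64_t b = balance.load(std::memory_order_relaxed);
    if (b >= 0) {
      return false;  // no surplus banked; keep "rounding up" as before
    }
    double allowed_fp_rate = -b / double{0x100000000} + target_fp_rate;
    return candidate_fp_rate <= allowed_fp_rate;
  }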

Thus, the option near-minimizes what the block cache considers as
"memory used" for a given target Bloom filter false positive rate and
Bloom filter implementation. There are no forward or backward
compatibility issues with this change, though it only works on the
format_version=5 Bloom filter.

With Jemalloc, we see about 10% reduction in memory footprint (and block
cache charge) for Bloom filters, but 1-2% increase in storage footprint,
due to encoding efficiency losses (FP rate is non-linear with bits/key).

Why not weighted random round up/down rather than state tracking? By
only requiring malloc_usable_size, we don't actually know what the next
larger and next smaller usable sizes for the allocator are. We pick a
requested size, accept and use whatever usable size it has, and use the
difference to inform our next choice. This allows us to narrow in on the
right balance without tracking/predicting usable sizes.
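
A minimal sketch of that feedback loop, assuming a glibc/jemalloc-style
malloc_usable_size (the helper name is hypothetical; the real code is
CalculateAndAllocate in filter_policy.cc below):

  #include <malloc.h>  // malloc_usable_size (glibc/jemalloc; platform-specific)
  #include <cstddef>
  #include <memory>

  // Sketch only: allocate the size we decided on, then learn what the
  // allocator actually handed back. The extra bytes are used by the filter
  // instead of sitting idle as internal fragmentation, and the difference in
  // resulting FP rate is fed back into the rounding balance for later filters.
  std::unique_ptr<char[]> AllocateAndMeasure(size_t requested,
                                             size_t* usable_out) {
    std::unique_ptr<char[]> buf(new char[requested]);
    *usable_out = malloc_usable_size(buf.get());  // always >= requested
    return buf;
  }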

Why not weight history of generated filter false positive rates by
number of keys? This could lead to excess skew in small filters after
generating a large filter.

Results with jemalloc (irrelevant details omitted):

(normal keys/filter, but high variance)
$ ./filter_bench -quick -impl=2 -average_keys_per_filter=30000 -vary_key_count_ratio=0.9
Build avg ns/key: 29.6278
Number of filters: 5516
Total size (MB): 200.046
Reported total allocated memory (MB): 220.597
Reported internal fragmentation: 10.2732%
Bits/key stored: 10.0097
Average FP rate %: 0.965228
$ ./filter_bench -quick -impl=2 -average_keys_per_filter=30000 -vary_key_count_ratio=0.9 -optimize_filters_for_memory
Build avg ns/key: 30.5104
Number of filters: 5464
Total size (MB): 200.015
Reported total allocated memory (MB): 200.322
Reported internal fragmentation: 0.153709%
Bits/key stored: 10.1011
Average FP rate %: 0.966313

(very few keys / filter, optimization not as effective due to ~59 byte
 internal fragmentation in blocked Bloom filter representation)
$ ./filter_bench -quick -impl=2 -average_keys_per_filter=1000 -vary_key_count_ratio=0.9
Build avg ns/key: 29.5649
Number of filters: 162950
Total size (MB): 200.001
Reported total allocated memory (MB): 224.624
Reported internal fragmentation: 12.3117%
Bits/key stored: 10.2951
Average FP rate %: 0.821534
$ ./filter_bench -quick -impl=2 -average_keys_per_filter=1000 -vary_key_count_ratio=0.9 -optimize_filters_for_memory
Build avg ns/key: 31.8057
Number of filters: 159849
Total size (MB): 200
Reported total allocated memory (MB): 208.846
Reported internal fragmentation: 4.42297%
Bits/key stored: 10.4948
Average FP rate %: 0.811006

(high keys/filter)
$ ./filter_bench -quick -impl=2 -average_keys_per_filter=1000000 -vary_key_count_ratio=0.9
Build avg ns/key: 29.7017
Number of filters: 164
Total size (MB): 200.352
Reported total allocated memory (MB): 221.5
Reported internal fragmentation: 10.5552%
Bits/key stored: 10.0003
Average FP rate %: 0.969358
$ ./filter_bench -quick -impl=2 -average_keys_per_filter=1000000 -vary_key_count_ratio=0.9 -optimize_filters_for_memory
Build avg ns/key: 30.7131
Number of filters: 160
Total size (MB): 200.928
Reported total allocated memory (MB): 200.938
Reported internal fragmentation: 0.00448054%
Bits/key stored: 10.1852
Average FP rate %: 0.963387
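
(For reference, "Reported internal fragmentation" is simply the relative gap
between the two reported sizes; e.g. for the first run above,
(220.597 - 200.046) / 200.046 ≈ 10.27%, matching the reported 10.2732%.)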

Test Plan: unit test added, 'make check' with gcc, clang and valgrind
pdillinger committed Jun 18, 2020
1 parent 552fd76 commit 15de8d4
Showing 7 changed files with 282 additions and 37 deletions.
27 changes: 27 additions & 0 deletions include/rocksdb/table.h
@@ -200,6 +200,33 @@ struct BlockBasedTableOptions {
// incompatible with block-based filters.
bool partition_filters = false;

// EXPERIMENTAL Option to generate Bloom filters that minimize memory
// internal fragmentation.
//
// When false, malloc_usable_size is not available, or format_version < 5,
// filters are generated without regard to internal fragmentation when
// loaded into memory (historical behavior). When true (and
// malloc_usable_size is available and format_version >= 5), then Bloom
// filters are generated to "round up" and "round down" their sizes to
// minimize internal fragmentation when loaded into memory, assuming the
// reading DB has the same memory allocation characteristics as the
// generating DB.
//
// While individual filters will vary in bits/key and false positive rate
// when setting is true, the implementation attempts to maintain a weighted
// average FP rate for filters consistent with this option set to false.
//
// With Jemalloc for example, this setting is expected to save about 10% of
// the memory footprint and block cache charge of filters, while increasing
// disk usage of filters by about 1-2% due to encoding efficiency losses
// with variance in bits/key.
//
// Because some memory counted by block cache might be unmapped pages within
// internal fragmentation, this option can increase observed RSS memory
// usage. With cache_index_and_filter_blocks=true, this option makes the
// block cache better at using space it is allowed.
bool optimize_filters_for_memory = false;

// Use delta encoding to compress keys in blocks.
// ReadOptions::pin_data requires this option to be disabled.
//
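
For context, enabling the new option from application code could look roughly
like the following sketch (illustrative values; standard BlockBasedTableOptions
setup):

  #include <rocksdb/filter_policy.h>
  #include <rocksdb/options.h>
  #include <rocksdb/table.h>

  // Illustrative only: build Options that use the new-format Bloom filter with
  // memory-optimized sizing.
  rocksdb::Options MakeOptionsWithMemoryOptimizedFilters() {
    rocksdb::BlockBasedTableOptions table_options;
    table_options.format_version = 5;  // required for the new Bloom filter
    table_options.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
    table_options.optimize_filters_for_memory = true;  // option added here
    rocksdb::Options options;
    options.table_factory.reset(
        rocksdb::NewBlockBasedTableFactory(table_options));
    return options;
  }
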
1 change: 1 addition & 0 deletions options/options_settable_test.cc
@@ -167,6 +167,7 @@ TEST_F(OptionsSettableTest, BlockBasedTableOptionsAllFieldsSettable) {
"block_size_deviation=8;block_restart_interval=4; "
"metadata_block_size=1024;"
"partition_filters=false;"
"optimize_filters_for_memory=true;"
"index_block_restart_interval=4;"
"filter_policy=bloomfilter:4:true;whole_key_filtering=1;"
"format_version=1;"
4 changes: 4 additions & 0 deletions table/block_based/block_based_table_factory.cc
@@ -268,6 +268,10 @@ static std::unordered_map<std::string, OptionTypeInfo>
{offsetof(struct BlockBasedTableOptions, partition_filters),
OptionType::kBoolean, OptionVerificationType::kNormal,
OptionTypeFlags::kNone, 0}},
{"optimize_filters_for_memory",
{offsetof(struct BlockBasedTableOptions, optimize_filters_for_memory),
OptionType::kBoolean, OptionVerificationType::kNormal,
OptionTypeFlags::kNone, 0}},
{"filter_policy",
{offsetof(struct BlockBasedTableOptions, filter_policy),
OptionType::kUnknown, OptionVerificationType::kByNameAllowFromNull,
187 changes: 157 additions & 30 deletions table/block_based/filter_policy.cc
@@ -28,9 +28,12 @@ namespace {
// See description in FastLocalBloomImpl
class FastLocalBloomBitsBuilder : public BuiltinFilterBitsBuilder {
public:
explicit FastLocalBloomBitsBuilder(const int millibits_per_key)
// Non-null aggregate_rounding_balance implies optimize_filters_for_memory
explicit FastLocalBloomBitsBuilder(
const int millibits_per_key,
std::atomic<int64_t>* aggregate_rounding_balance)
: millibits_per_key_(millibits_per_key),
num_probes_(FastLocalBloomImpl::ChooseNumProbes(millibits_per_key_)) {
aggregate_rounding_balance_(aggregate_rounding_balance) {
assert(millibits_per_key >= 1000);
}

@@ -48,33 +51,36 @@ class FastLocalBloomBitsBuilder : public BuiltinFilterBitsBuilder {
}

virtual Slice Finish(std::unique_ptr<const char[]>* buf) override {
size_t num_entry = hash_entries_.size();
std::unique_ptr<char[]> mutable_buf;
uint32_t len_with_metadata =
CalculateSpace(static_cast<uint32_t>(hash_entries_.size()));
char* data = new char[len_with_metadata];
memset(data, 0, len_with_metadata);
CalculateAndAllocate(num_entry, &mutable_buf, /*update_balance*/ true);

assert(data);
assert(mutable_buf);
assert(len_with_metadata >= 5);

// Compute num_probes after any rounding / adjustments
int num_probes = GetNumProbes(num_entry, len_with_metadata);

uint32_t len = len_with_metadata - 5;
if (len > 0) {
AddAllEntries(data, len);
AddAllEntries(mutable_buf.get(), len, num_probes);
}

assert(hash_entries_.empty());

// See BloomFilterPolicy::GetBloomBitsReader re: metadata
// -1 = Marker for newer Bloom implementations
data[len] = static_cast<char>(-1);
mutable_buf[len] = static_cast<char>(-1);
// 0 = Marker for this sub-implementation
data[len + 1] = static_cast<char>(0);
mutable_buf[len + 1] = static_cast<char>(0);
// num_probes (and 0 in upper bits for 64-byte block size)
data[len + 2] = static_cast<char>(num_probes_);
mutable_buf[len + 2] = static_cast<char>(num_probes);
// rest of metadata stays zero

const char* const_data = data;
buf->reset(const_data);
assert(hash_entries_.empty());

return Slice(data, len_with_metadata);
Slice rv(mutable_buf.get(), len_with_metadata);
*buf = std::move(mutable_buf);
return rv;
}

int CalculateNumEntry(const uint32_t bytes) override {
@@ -84,26 +90,142 @@ class FastLocalBloomBitsBuilder : public BuiltinFilterBitsBuilder {
}

uint32_t CalculateSpace(const int num_entry) override {
uint32_t num_cache_lines = 0;
if (millibits_per_key_ > 0 && num_entry > 0) {
num_cache_lines = static_cast<uint32_t>(
(int64_t{num_entry} * millibits_per_key_ + 511999) / 512000);
// NB: the BuiltinFilterBitsBuilder API presumes len fits in uint32_t.
return static_cast<uint32_t>(
CalculateAndAllocate(static_cast<size_t>(num_entry),
/* buf */ nullptr,
/*update_balance*/ false));
}

// To choose size using malloc_usable_size, we have to actually allocate.
uint32_t CalculateAndAllocate(size_t num_entry, std::unique_ptr<char[]>* buf,
bool update_balance) {
std::unique_ptr<char[]> tmpbuf;

// If not for cache line blocks in the filter, what would the target
// length in bytes be?
size_t raw_target_len = static_cast<size_t>(
(uint64_t{num_entry} * millibits_per_key_ + 7999) / 8000);

if (raw_target_len >= size_t{0xffffffc0}) {
// Max supported for this data structure implementation
raw_target_len = size_t{0xffffffc0};
}

// Round up to nearest multiple of 64 (block size). This adjustment is
// used for target FP rate only so that we don't receive complaints about
// lower FP rate vs. historic Bloom filter behavior.
uint32_t target_len =
static_cast<uint32_t>(raw_target_len + 63) & ~uint32_t{63};

// Return value set to a default; overwritten in some cases
uint32_t rv = target_len + /* metadata */ 5;
#ifdef ROCKSDB_MALLOC_USABLE_SIZE
if (aggregate_rounding_balance_ != nullptr) {
// Do optimize_filters_for_memory, using malloc_usable_size.
// Approach: try to keep FP rate balance better than or on
// target (negative aggregate_rounding_balance_). We can then select a
// lower bound filter size (within reasonable limits) that gets us as
// close to on target as possible. We request allocation for that filter
// size and use malloc_usable_size to "round up" to the actual
// allocation size.

// Race condition on balance is OK because it can only cause temporary
// skew in rounding up vs. rounding down, as long as updates are atomic
// and relative.
int64_t balance = aggregate_rounding_balance_->load();

double target_fp_rate = EstimatedFpRate(num_entry, target_len + 5);
double rv_fp_rate = target_fp_rate;

if (balance < 0) {
// See formula for BloomFilterPolicy::aggregate_rounding_balance_
double for_balance_fp_rate =
-balance / double{0x100000000} + target_fp_rate;

// To simplify, we just try a few modified smaller sizes. This also
// caps how much we vary filter size vs. target, to avoid outlier
// behavior from excessive variance.
for (uint64_t maybe_len64 :
{uint64_t{3} * target_len / 4, uint64_t{13} * target_len / 16,
uint64_t{7} * target_len / 8, uint64_t{15} * target_len / 16}) {
uint32_t maybe_len =
static_cast<uint32_t>(maybe_len64) & ~uint32_t{63};
double maybe_fp_rate = EstimatedFpRate(num_entry, maybe_len + 5);
if (maybe_fp_rate <= for_balance_fp_rate) {
rv = maybe_len + /* metadata */ 5;
rv_fp_rate = maybe_fp_rate;
break;
}
}
}

// Filter blocks are loaded into block cache with their block trailer.
// We need to make sure that's accounted for in choosing a
// fragmentation-friendly size.
const uint32_t kExtraPadding = kBlockTrailerSize;

// Allocate and adjust for usable size
tmpbuf.reset(new char[rv + kExtraPadding]);
size_t usable = malloc_usable_size(tmpbuf.get());
if (usable > rv + kExtraPadding) {
size_t usable_len = (usable - kExtraPadding - /* metadata */ 5);
if (usable_len >= size_t{0xffffffc0}) {
// Max supported for this data structure implementation
usable_len = size_t{0xffffffc0};
}
rv = (static_cast<uint32_t>(usable_len) & ~uint32_t{63}) +
/* metadata */ 5;
rv_fp_rate = EstimatedFpRate(num_entry, rv);
} else {
assert(usable == rv + kExtraPadding);
}
memset(tmpbuf.get(), 0, rv);

if (update_balance) {
int64_t diff = static_cast<int64_t>((rv_fp_rate - target_fp_rate) *
double{0x100000000});
*aggregate_rounding_balance_ += diff;
}
}
#endif // ROCKSDB_MALLOC_USABLE_SIZE
if (buf) {
if (tmpbuf) {
*buf = std::move(tmpbuf);
} else {
buf->reset(new char[rv]());
}
}
return num_cache_lines * 64 + /*metadata*/ 5;
return rv;
}

double EstimatedFpRate(size_t keys, size_t bytes) override {
return FastLocalBloomImpl::EstimatedFpRate(keys, bytes - /*metadata*/ 5,
num_probes_, /*hash bits*/ 64);
double EstimatedFpRate(size_t keys, size_t len_with_metadata) override {
int num_probes = GetNumProbes(keys, len_with_metadata);
return FastLocalBloomImpl::EstimatedFpRate(
keys, len_with_metadata - /*metadata*/ 5, num_probes, /*hash bits*/ 64);
}

private:
void AddAllEntries(char* data, uint32_t len) {
// Compute num_probes after any rounding / adjustments
int GetNumProbes(size_t keys, size_t len_with_metadata) {
uint64_t millibits = uint64_t{len_with_metadata - 5} * 8000;
int actual_millibits_per_key =
static_cast<int>(millibits / std::max(keys, size_t{1}));
// BEGIN XXX/TODO(peterd): preserving old/default behavior for now to
// minimize unit test churn. Remove this some time.
if (!aggregate_rounding_balance_) {
actual_millibits_per_key = millibits_per_key_;
}
// END XXX/TODO
return FastLocalBloomImpl::ChooseNumProbes(actual_millibits_per_key);
}

void AddAllEntries(char* data, uint32_t len, int num_probes) {
// Simple version without prefetching:
//
// for (auto h : hash_entries_) {
// FastLocalBloomImpl::AddHash(Lower32of64(h), Upper32of64(h), len,
// num_probes_, data);
// num_probes, data);
// }

const size_t num_entries = hash_entries_.size();
@@ -129,7 +251,7 @@ class FastLocalBloomBitsBuilder : public BuiltinFilterBitsBuilder {
uint32_t& hash_ref = hashes[i & kBufferMask];
uint32_t& byte_offset_ref = byte_offsets[i & kBufferMask];
// Process (add)
FastLocalBloomImpl::AddHashPrepared(hash_ref, num_probes_,
FastLocalBloomImpl::AddHashPrepared(hash_ref, num_probes,
data + byte_offset_ref);
// And buffer
uint64_t h = hash_entries_.front();
@@ -141,13 +263,16 @@ class FastLocalBloomBitsBuilder : public BuiltinFilterBitsBuilder {

// Finish processing
for (i = 0; i <= kBufferMask && i < num_entries; ++i) {
FastLocalBloomImpl::AddHashPrepared(hashes[i], num_probes_,
FastLocalBloomImpl::AddHashPrepared(hashes[i], num_probes,
data + byte_offsets[i]);
}
}

// Target allocation per added key, in thousandths of a bit.
int millibits_per_key_;
int num_probes_;
// See BloomFilterPolicy::aggregate_rounding_balance_. If nullptr,
// always "round up" like historic behavior.
std::atomic<int64_t>* aggregate_rounding_balance_;
// A deque avoids unnecessary copying of already-saved values
// and has near-minimal peak memory use.
std::deque<uint64_t> hash_entries_;
@@ -457,7 +582,7 @@ const std::vector<BloomFilterPolicy::Mode> BloomFilterPolicy::kAllUserModes = {
};

BloomFilterPolicy::BloomFilterPolicy(double bits_per_key, Mode mode)
: mode_(mode), warned_(false) {
: mode_(mode), warned_(false), aggregate_rounding_balance_(0) {
// Sanitize bits_per_key
if (bits_per_key < 1.0) {
bits_per_key = 1.0;
@@ -549,6 +674,7 @@ FilterBitsBuilder* BloomFilterPolicy::GetFilterBitsBuilder() const {
FilterBitsBuilder* BloomFilterPolicy::GetBuilderWithContext(
const FilterBuildingContext& context) const {
Mode cur = mode_;
bool offm = context.table_options.optimize_filters_for_memory;
// Unusual code construction so that we can have just
// one exhaustive switch without (risky) recursion
for (int i = 0; i < 2; ++i) {
@@ -563,7 +689,8 @@ FilterBitsBuilder* BloomFilterPolicy::GetBuilderWithContext(
case kDeprecatedBlock:
return nullptr;
case kFastLocalBloom:
return new FastLocalBloomBitsBuilder(millibits_per_key_);
return new FastLocalBloomBitsBuilder(
millibits_per_key_, offm ? &aggregate_rounding_balance_ : nullptr);
case kLegacyBloom:
if (whole_bits_per_key_ >= 14 && context.info_log &&
!warned_.load(std::memory_order_relaxed)) {
10 changes: 10 additions & 0 deletions table/block_based/filter_policy_internal.h
@@ -135,6 +135,16 @@ class BloomFilterPolicy : public FilterPolicy {
// only report once per BloomFilterPolicy instance, to keep the noise down.)
mutable std::atomic<bool> warned_;

// State for implementing optimize_filters_for_memory. Essentially, this
// tracks a surplus or deficit in total FP rate of filters generated by
// builders under this policy vs. what would have been generated without
// optimize_filters_for_memory.
//
// To avoid floating point weirdness, the actual value is
// Sum over all generated filters f:
// (predicted_fp_rate(f) - predicted_fp_rate(f|o_f_f_m=false)) * 2^32
mutable std::atomic<int64_t> aggregate_rounding_balance_;

// For newer Bloom filter implementation(s)
FilterBitsReader* GetBloomBitsReader(const Slice& contents) const;
};
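
As a worked example of that encoding (illustrative numbers): a filter that
rounds down and is predicted at 1.10% FP where always rounding up would have
predicted 1.00% adds (0.0110 - 0.0100) * 2^32 ≈ 4.3 million to the balance;
filters that round up contribute negative terms, and rounding down is only
considered while the running sum is negative.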
