Fix race with concurrent merges and deletes
Change order of scan & fold: now oldest -> youngest

New regression test (currently broken) for recent scan change of oldest -> youngest

Fix rough edges on test/bitcask_qc.erl & test/bitcask_qc_expiry.erl

Write tombstone on delete only if there's a keydir entry, plus race bugfix

Fix remaining bug when bitcask scan/fold order was changed to oldest->youngest

The NIF change fixes a long-standing latent bug: when put'ing a key
that does not exist, if there's a race with a merge, keydir_put_int()
would incorrectly return 'ok' rather than 'already_exists'.  The
'already_exists' return value is a signal to the read-write owner of
the bitcask that the current append file must be closed and a new one
opened (with a larger fileid than any merge).

The tombstone change adds a new tombstone data format.  Old tombstones
will be handled correctly.  New tombstones for any key K contain
the fileid & offset of the key that it is deleting.  If the fileid
F still exists, then the tombstone will always be merged forward.
If the fileid F does not exist, then merging forward is not
necessary: when F was merged, the on-disk representation of key K
was not merged forward, because either K did not exist in the keydir
(it was deleted by this tombstone) or it had been replaced by a newer put.

FML: fix tombstone merging bug when K & K's tombstone are in same fileid

Originally found with bitcask_pulse, I deconstructed the test case to
help understand what was happening: the new EUnit test is
new_20131217_a_test_.

As a result of the puts, key #13 is written 3x to fileid #1 (normal,
tombstone, normal) and 1x to fileid #2 (normal @ the very beginning
of the file).  The merge creates fileid #3 and copies only the
tombstone (the normal entry isn't copied because it is out-of-date).
Before the close, the internal keydir contains the correct info
about key #13, but after the close and re-open, we see key #13's
entries: normal (and most recent) in fileid #2, and tombstone in
fileid #3, oops.

The fix is to remove all of the merge input fileids from the set of fileids
that will survive/exist after the merge is finished.

WIP: new test new_20131217_c_test_() is broken, need to experiment with bitcask:delete & NIF usage before continuing

Fix non-determinism in the fold_visits_unfrozen_test test ... now 100% broken

Fix iterator freeze bug demonstrated by last commit: remove kh_put_will_resize() predicate test for creating a keydir->pending

Fix bug introduced by almost-bugfix in commit 1a9c99 that was supposed to be a partial fix for GH #82

Fix unexported function errors, thanks Buildbot

Omnibus bugfixing, including:

* Add 'already_exists' return to bitcask_nifs_keydir_remove(): we need
it to signal races with merge, alas.

* Add state to #filestate to be able to 'undo' last update to both a
data file and its hint file.  This probably means that we're going
to have to play some games with merge file naming, TBD, stay tuned.

* For bitcask:delete(), make the keydir delete conditional: if it fails,
redo the entire thing again.

* inner_merge_write() can have a race such that, if a partial merge
  happens at the proper time afterward, we see an old value reappearing.
  Fix by checking the return value of the keydir put and, if it is
  'already_exists', undoing the write.

* When do_put() has a race and gets 'already_exists' from the keydir,
  undo the write before retrying.  Otherwise, if this key is deleted
  sometime later and a partial merge happens after that, we might see
  this value reappear after the merge is done.

* Add file_truncate() to bitcask_file.erl.  TODO: do the same for the
NIF style I/O.

* Better robustness (I hope) to EUnit tests in bitcask_merge_delete.erl

Add UNIX epoch time to put_int NIF call, to avoid C use of time(3).

I hope this will eliminate a nasty source of nondeterminism during
PULSE testing.

Fix buildbot ?TESTDIR definition location problem

Uff da: most PULSE non-determinism problems fixed?

Fix merge race when keydir_put specifies old fileid & offset races with delete

Scenario, with 3 writers, 2 & 3 are racing:

* Writer 1: Put K, write @ {file 1, offset 63}
* Writer 2: Delete operation starts ... but there is no write to disk yet
* Writer 3: Merge scans file 1, sees K @ {1,63} -> not out of date ->
  but there is no write to disk yet
* Writer 2: writes a tombstone @ {3,48}
* Writer 2: Keydir conditional delete @ old location @ {1,63} is ok
* Writer 2: keydir delete returns from NIF-land
* Writer 3: merge copies data from {1, 63} -> {4, 42}
* Writer 3: keydir put {4, 42} conditional on {1,63} succeeds due to
  incorrect conditional validation: the record is gone, but bug
  permits put to return 'ok'.

Fix bug in previous commit: offset=0 is valid, do not check it

Use larger retry # for open_fold_files() and keydir_wait_ready() when PULSE testing

Small bitcask_pulse changes: file size, pretty-print the keys used, slightly less verbosity

Adjust command frequency weightings for PULSE model

Add no_tombstones_after_reopen_test() to test desired condition

Avoid storing tombstones in the keydir when possible

When a Bitcask is opened and scan_key_files() is reading data
from disk and loading the RAM keydir, we now detect if the key
is a tombstone and, if so, do not store it in the keydir.

Normally, only hint files are scanned during startup.  However,
hint files did not store enough information to confirm whether
a key is a tombstone.  I have added such a flag in a
backward-compatible way: the offset field has been reduced from
64 to 63 bits, and the uppermost bit (which is assumed to be
0 in all cases -- we assume nobody has actually written a file
big enough to require 64 bits to describe an offset) is used
to signal tombstone status.

An optional argument was added to the increment_file_id() NIF
to communicate to the NIF that a data file exists ... a fact
that would otherwise be lost if a hint/data file contains
only tombstones.

For testing purposes, fold() and fold_keys() are extended with
another argument to expose the presence of keydir tombstones.

Adjust timeouts and concurrency limits in bitcask_pulse.erl
to avoid the worst of false-positive errors when using the
PULSE model: {badrpc,timeout} nonsense.

Add checks to the PULSE model to try to catch improper tombstones in keydir

Dialyzer fixes

Add failing test: tombstones exist in keydir if hint files are destroyed

Add {tombstone,Key} wrapper when folding .data files; fixes test failure in last commit

Fix intermittent bitcask_merge_worker timeout failure

NIF review: avoid looking at argv[1] directly

Remove unwanted oopsie foo.erl

Restore put will resize check for multi-fold

Restore the check that postpones freezing the keydir until a put
would require a structural change.  Without this, the new multifold
code is crippled.

Redo merge/delete race fix

Consolidate the retry-on-merge-race logic in do_put, making delete
simply a wrapper around a put(tombstone) operation.
Also some cleanup regarding the different tombstone versions.

Add missing file truncate op for NIF mode

Plus a little unit test to make sure the file truncation operation
works.
slfritchie authored and engelsanchez committed Mar 7, 2014
1 parent 1596a39 commit 4466fea
Showing 14 changed files with 943 additions and 224 deletions.
107 changes: 97 additions & 10 deletions c_src/bitcask_nifs.c
@@ -37,6 +37,16 @@

#include <stdio.h>

void DEBUG2(const char *fmt, ...) { }
/* #include <stdarg.h> */
/* void DEBUG2(const char *fmt, ...) */
/* { */
/* va_list ap; */
/* va_start(ap, fmt); */
/* vfprintf(stderr, fmt, ap); */
/* va_end(ap); */
/* } */

#ifdef BITCASK_DEBUG
#include <stdarg.h>
#include <ctype.h>
@@ -226,7 +236,7 @@ typedef struct
uint32_t newest_folder; // Start time for the last keyfolder
uint64_t iter_generation;
uint64_t pending_updated;
uint64_t pending_start; // os:timestamp() as 64-bit integer
uint64_t pending_start; // UNIX epoch seconds (since 1970)
ErlNifPid* pending_awaken; // processes to wake once pending merged into entries
unsigned int pending_awaken_count;
unsigned int pending_awaken_size;
@@ -277,6 +287,11 @@ typedef struct
#define is_pending_tombstone(e) ((e)->offset == MAX_OFFSET)
#define set_pending_tombstone(e) {(e)->offset = MAX_OFFSET; }

// Use a magic number for signaling that a database is both in read-write
// mode and that we want to do a get while ignoring the iteration status
// of the keydir.
#define MAGIC_OVERRIDE_ITERATING_STATUS 0x42424242

// Atoms (initialized in on_load)
static ERL_NIF_TERM ATOM_ALLOCATION_ERROR;
static ERL_NIF_TERM ATOM_ALREADY_EXISTS;
@@ -339,6 +354,7 @@ ERL_NIF_TERM bitcask_nifs_file_read(ErlNifEnv* env, int argc, const ERL_NIF_TERM
ERL_NIF_TERM bitcask_nifs_file_write(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]);
ERL_NIF_TERM bitcask_nifs_file_position(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]);
ERL_NIF_TERM bitcask_nifs_file_seekbof(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]);
ERL_NIF_TERM bitcask_nifs_file_truncate(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]);

ERL_NIF_TERM errno_atom(ErlNifEnv* env, int error);
ERL_NIF_TERM errno_error_tuple(ErlNifEnv* env, ERL_NIF_TERM key, int error);
@@ -357,7 +373,7 @@ static ErlNifFunc nif_funcs[] =
{"keydir_new", 0, bitcask_nifs_keydir_new0},
{"keydir_new", 1, bitcask_nifs_keydir_new1},
{"keydir_mark_ready", 1, bitcask_nifs_keydir_mark_ready},
{"keydir_put_int", 9, bitcask_nifs_keydir_put_int},
{"keydir_put_int", 10, bitcask_nifs_keydir_put_int},
{"keydir_get_int", 3, bitcask_nifs_keydir_get_int},
{"keydir_remove", 3, bitcask_nifs_keydir_remove},
{"keydir_remove_int", 6, bitcask_nifs_keydir_remove},
@@ -370,6 +386,7 @@ static ErlNifFunc nif_funcs[] =
{"keydir_trim_fstats", 2, bitcask_nifs_keydir_trim_fstats},

{"increment_file_id", 1, bitcask_nifs_increment_file_id},
{"increment_file_id", 2, bitcask_nifs_increment_file_id},

{"lock_acquire_int", 2, bitcask_nifs_lock_acquire},
{"lock_release_int", 1, bitcask_nifs_lock_release},
@@ -384,7 +401,8 @@ static ErlNifFunc nif_funcs[] =
{"file_read_int", 2, bitcask_nifs_file_read},
{"file_write_int", 2, bitcask_nifs_file_write},
{"file_position_int", 2, bitcask_nifs_file_position},
{"file_seekbof_int", 1, bitcask_nifs_file_seekbof}
{"file_seekbof_int", 1, bitcask_nifs_file_seekbof},
{"file_truncate_int", 1, bitcask_nifs_file_truncate}
};

ERL_NIF_TERM bitcask_nifs_keydir_new0(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
@@ -1114,6 +1132,7 @@ ERL_NIF_TERM bitcask_nifs_keydir_put_int(ErlNifEnv* env, int argc, const ERL_NIF
bitcask_keydir_handle* handle;
bitcask_keydir_entry_proxy entry;
ErlNifBinary key;
uint32_t nowsec;
uint32_t newest_put;
uint32_t old_file_id;
uint64_t old_offset;
@@ -1124,15 +1143,17 @@ ERL_NIF_TERM bitcask_nifs_keydir_put_int(ErlNifEnv* env, int argc, const ERL_NIF
enif_get_uint(env, argv[3], &(entry.total_sz)) &&
enif_get_uint64_bin(env, argv[4], &(entry.offset)) &&
enif_get_uint(env, argv[5], &(entry.tstamp)) &&
enif_get_uint(env, argv[6], &(newest_put)) &&
enif_get_uint(env, argv[7], &(old_file_id)) &&
enif_get_uint64_bin(env, argv[8], &(old_offset)))
enif_get_uint(env, argv[6], &(nowsec)) &&
enif_get_uint(env, argv[7], &(newest_put)) &&
enif_get_uint(env, argv[8], &(old_file_id)) &&
enif_get_uint64_bin(env, argv[9], &(old_offset)))
{
bitcask_keydir* keydir = handle->keydir;
entry.key = (char*)key.data;
entry.key_sz = key.size;

LOCK(keydir);
DEBUG2("LINE %d put\r\n", __LINE__);

DEBUG_BIN(dbgKey, key.data, key.size);
DEBUG("+++ Put key = %s file_id=%d offset=%d total_sz=%d tstamp=%u old_file_id=%d\r\n",
@@ -1147,6 +1168,7 @@ ERL_NIF_TERM bitcask_nifs_keydir_put_int(ErlNifEnv* env, int argc, const ERL_NIF
// If conditional put and not found, bail early
if (!f.found && old_file_id != 0)
{
DEBUG2("LINE %d put -> already_exists\r\n", __LINE__);
UNLOCK(keydir);
return ATOM_ALREADY_EXISTS;
}
@@ -1157,11 +1179,26 @@ ERL_NIF_TERM bitcask_nifs_keydir_put_int(ErlNifEnv* env, int argc, const ERL_NIF
(keydir->pending == NULL))
{
keydir->pending = kh_init(entries);
keydir->pending_start = time(NULL);
keydir->pending_start = nowsec;
}

if (!f.found || f.proxy.is_tombstone)
{
if ((newest_put &&
(entry.file_id < keydir->biggest_file_id)) ||
old_file_id != 0) {
/*
* Really, it doesn't exist. But the atom 'already_exists'
* is also a signal that a merge has incremented the
* keydir->biggest_file_id and that we need to retry this
* operation after Erlang-land has re-written the key & val
* to a new location in the same-or-bigger file id.
*/
DEBUG2("LINE %d put -> already_exists\r\n", __LINE__);
UNLOCK(keydir);
return ATOM_ALREADY_EXISTS;
}

keydir->key_count++;
keydir->key_bytes += key.size;

@@ -1174,6 +1211,7 @@ ERL_NIF_TERM bitcask_nifs_keydir_put_int(ErlNifEnv* env, int argc, const ERL_NIF
DEBUG("+++ Put new\r\n");
DEBUG_KEYDIR(keydir);

DEBUG2("LINE %d put -> ok (!found || !tombstone)\r\n", __LINE__);
UNLOCK(keydir);
return ATOM_OK;
}
@@ -1184,6 +1222,7 @@ ERL_NIF_TERM bitcask_nifs_keydir_put_int(ErlNifEnv* env, int argc, const ERL_NIF
old_offset == f.proxy.offset))
{
DEBUG("++ Conditional not match\r\n");
DEBUG2("LINE %d put -> already_exists/cond bad match\r\n", __LINE__);
UNLOCK(keydir);
return ATOM_ALREADY_EXISTS;
}
@@ -1220,6 +1259,7 @@ ERL_NIF_TERM bitcask_nifs_keydir_put_int(ErlNifEnv* env, int argc, const ERL_NIF
}

put_entry(keydir, &f, &entry);
DEBUG2("LINE %d put -> ok\r\n", __LINE__);
UNLOCK(keydir);
DEBUG("Finished put\r\n");
DEBUG_KEYDIR(keydir);
@@ -1233,6 +1273,7 @@ ERL_NIF_TERM bitcask_nifs_keydir_put_int(ErlNifEnv* env, int argc, const ERL_NIF
update_fstats(env, keydir, entry.file_id, entry.tstamp,
0, 1, 0, entry.total_sz);
}
DEBUG2("LINE %d put -> already_exists end\r\n", __LINE__);
UNLOCK(keydir);
DEBUG("No update\r\n");
return ATOM_ALREADY_EXISTS;
@@ -1340,7 +1381,7 @@ ERL_NIF_TERM bitcask_nifs_keydir_remove(ErlNifEnv* env, int argc, const ERL_NIF_
{
UNLOCK(keydir);
DEBUG("+++Conditional no match\r\n");
return ATOM_OK;
return ATOM_ALREADY_EXISTS;
}

// Remove the key from the keydir stats
@@ -1354,13 +1395,15 @@ ERL_NIF_TERM bitcask_nifs_keydir_remove(ErlNifEnv* env, int argc, const ERL_NIF_
// If found an entry in the pending hash, convert it to a tombstone
if (fr.pending_entry)
{
DEBUG2("LINE %d pending put\r\n", __LINE__);
set_pending_tombstone(fr.pending_entry);
fr.pending_entry->tstamp = remove_time;
}
// If frozen, add tombstone to pending hash (iteration must have
// started between put/remove call in bitcask:delete.
else if (keydir->pending)
{
DEBUG2("LINE %d pending put\r\n", __LINE__);
bitcask_keydir_entry* pending_entry =
add_entry(keydir, keydir->pending, &fr.proxy);
set_pending_tombstone(pending_entry);
@@ -1384,6 +1427,7 @@ ERL_NIF_TERM bitcask_nifs_keydir_remove(ErlNifEnv* env, int argc, const ERL_NIF_
}
else // not found
{
DEBUG("Not found - not removed\r\n");
UNLOCK(keydir);
return ATOM_OK;
}
@@ -1460,6 +1504,7 @@ ERL_NIF_TERM bitcask_nifs_keydir_copy(ErlNifEnv* env, int argc, const ERL_NIF_TE
}
if (keydir->pending != NULL)
{
DEBUG2("LINE %d pending copy\r\n", __LINE__);
for (itr = kh_begin(keydir->pending); itr != kh_end(keydir->pending); ++itr)
{
// Allocate our entry to be inserted into the new table and copy the record
@@ -1509,14 +1554,17 @@ static int can_itr_keydir(bitcask_keydir* keydir, uint64_t ts, int maxage, int m
if (keydir->pending == NULL || // not frozen or caller wants to reuse
(maxage < 0 && maxputs < 0)) // the exiting freeze
{
DEBUG2("LINE %d can_itr\r\n", __LINE__);
return 1;
}
else if (ts == 0 || ts < keydir->pending_start)
{ // if clock skew (or forced wait), force key folding to wait
DEBUG2("LINE %d can_itr\r\n", __LINE__);
return 0; // which will fix keydir->pending_start
}
else
{
DEBUG2("LINE %d can_itr\r\n", __LINE__);
uint64_t age = ts - keydir->pending_start;
return ((maxage < 0 || age <= maxage) &&
(maxputs < 0 || keydir->pending_updated <= maxputs));
@@ -1559,6 +1607,7 @@ ERL_NIF_TERM bitcask_nifs_keydir_itr(ErlNifEnv* env, int argc, const ERL_NIF_TER
keydir->newest_folder = ts;
keydir->keyfolders++;
handle->iterator = kh_begin(keydir->entries);
DEBUG2("LINE %d itr started, keydir->pending = 0x%lx\r\n", __LINE__, keydir->pending);
UNLOCK(handle->keydir);
return ATOM_OK;
}
@@ -1580,6 +1629,7 @@ ERL_NIF_TERM bitcask_nifs_keydir_itr(ErlNifEnv* env, int argc, const ERL_NIF_TER
}
enif_self(env, &keydir->pending_awaken[keydir->pending_awaken_count]);
keydir->pending_awaken_count++;
DEBUG2("LINE %d itr\r\n", __LINE__);
UNLOCK(handle->keydir);
return ATOM_OUT_OF_DATE;
}
@@ -1612,6 +1662,7 @@ ERL_NIF_TERM bitcask_nifs_keydir_itr_next(ErlNifEnv* env, int argc, const ERL_NI
{
if (kh_exist(keydir->entries, handle->iterator))
{
DEBUG2("LINE %d itr_next\r\n", __LINE__);
bitcask_keydir_entry* entry = kh_key(keydir->entries, handle->iterator);
ErlNifBinary key;
bitcask_keydir_entry_proxy proxy;
@@ -1690,6 +1741,7 @@ ERL_NIF_TERM bitcask_nifs_keydir_itr_release(ErlNifEnv* env, int argc, const ERL
// If last iterator closing, unfreeze keydir and merge pending entries.
if (handle->keydir->keyfolders == 0 && handle->keydir->pending != NULL)
{
DEBUG2("LINE %d itr_release\r\n", __LINE__);
merge_pending_entries(env, handle->keydir);
handle->keydir->iter_generation++;
}
@@ -1781,12 +1833,22 @@ ERL_NIF_TERM bitcask_nifs_keydir_release(ErlNifEnv* env, int argc, const ERL_NIF
ERL_NIF_TERM bitcask_nifs_increment_file_id(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
{
bitcask_keydir_handle* handle;
uint32_t conditional_file_id = 0;

if (enif_get_resource(env, argv[0], bitcask_keydir_RESOURCE, (void**)&handle))
{

if (argc == 2) {
enif_get_uint(env, argv[1], &(conditional_file_id));
}
LOCK(handle->keydir);
(handle->keydir->biggest_file_id)++;
if (conditional_file_id == 0) {
(handle->keydir->biggest_file_id)++;
} else {
if (conditional_file_id > handle->keydir->biggest_file_id) {
handle->keydir->biggest_file_id = conditional_file_id;
}
}
uint32_t id = handle->keydir->biggest_file_id;
UNLOCK(handle->keydir);
return enif_make_tuple2(env, ATOM_OK, enif_make_uint(env, id));
@@ -2289,7 +2351,7 @@ ERL_NIF_TERM bitcask_nifs_file_seekbof(ErlNifEnv* env, int argc, const ERL_NIF_T

if (enif_get_resource(env, argv[0], bitcask_file_RESOURCE, (void**)&handle))
{
if (lseek(handle->fd, 0, SEEK_SET) != -1)
if (lseek(handle->fd, 0, SEEK_SET) != (off_t)-1)
{
return ATOM_OK;
}
@@ -2305,6 +2367,30 @@ ERL_NIF_TERM bitcask_nifs_file_seekbof(ErlNifEnv* env, int argc, const ERL_NIF_T
}
}

ERL_NIF_TERM bitcask_nifs_file_truncate(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
{
bitcask_file_handle* handle;

if (enif_get_resource(env, argv[0], bitcask_file_RESOURCE, (void**)&handle))
{
off_t ofs = lseek(handle->fd, 0, SEEK_CUR);
if (ofs == (off_t)-1)
{
return enif_make_tuple2(env, ATOM_ERROR, errno_atom(env, errno));
}

if (ftruncate(handle->fd, ofs) == -1)
{
return errno_error_tuple(env, ATOM_FTRUNCATE_ERROR, errno);
}

return ATOM_OK;
}
else
{
return enif_make_badarg(env);
}
}

ERL_NIF_TERM errno_atom(ErlNifEnv* env, int error)
{
@@ -2413,6 +2499,7 @@ static void merge_pending_entries(ErlNifEnv* env, bitcask_keydir* keydir)

// Free all resources for keydir folding
kh_destroy(entries, keydir->pending);
DEBUG2("LINE %d keydir->pending = NULL\r\n", __LINE__);
keydir->pending = NULL;

keydir->pending_updated = 0;
21 changes: 16 additions & 5 deletions include/bitcask.hrl
@@ -13,7 +13,10 @@
fd, % File handle
hintfd, % File handle for hints
hintcrc=0,% CRC-32 of current hint
ofs }). % Current offset for writing
ofs, % Current offset for writing
l_ofs=0, % Last offset written to data file
l_hbytes=0,% Last # bytes written to hint file
l_hintcrc=0}). % CRC-32 of current hint prior to last write

-record(file_status, { filename,
fragmented,
@@ -25,9 +28,17 @@

-define(FMT(Str, Args), lists:flatten(io_lib:format(Str, Args))).

-define(TOMBSTONE, <<"bitcask_tombstone">>).

-define(OFFSETFIELD, 64).
-define(TOMBSTONE_V1, <<"bitcask_tombstone">>).
-define(TOMBSTONE_V1_SIZE, size(?TOMBSTONE_V1)).
-define(TOMBSTONE_V2_STR, "bitcask_tombstone2").
-define(TOMBSTONE_V2,<<?TOMBSTONE_V2_STR>>).
% Size of tombstone2 + file id + offset
-define(TOMBSTONE_V2_SIZE, (size(?TOMBSTONE_V2)+8+4)).
-define(MAX_TOMBSTONE_SIZE, ?TOMBSTONE_V2_SIZE).

-define(OFFSETFIELD_V1, 64).
-define(TOMBSTONEFIELD_V2, 1).
-define(OFFSETFIELD_V2, 63).
-define(TSTAMPFIELD, 32).
-define(KEYSIZEFIELD, 16).
-define(TOTALSIZEFIELD, 32).
@@ -36,7 +47,7 @@
-define(HEADER_SIZE, 14). % 4 + 4 + 2 + 4 bytes
-define(MAXKEYSIZE, 2#1111111111111111).
-define(MAXVALSIZE, 2#11111111111111111111111111111111).
-define(MAXOFFSET, 16#ffffffffffffffff). % max 64-bit unsigned
-define(MAXOFFSET_V2, 16#7fffffffffffffff). % max 63-bit unsigned

%% for hintfile validation
-define(CHUNK_SIZE, 65535).
4 changes: 3 additions & 1 deletion rebar.config.script
@@ -12,7 +12,7 @@ case PulseBuild of
[ {bitcask_nifs, keydir_new, 0}
, {bitcask_nifs, keydir_new, 1}
, {bitcask_nifs, keydir_mark_ready, 1}
, {bitcask_nifs, keydir_put_int, 9}
, {bitcask_nifs, keydir_put_int, 10}
, {bitcask_nifs, keydir_get_int, 4}
, {bitcask_nifs, keydir_remove, 3}
, {bitcask_nifs, keydir_remove_int, 6}
@@ -25,6 +25,7 @@ case PulseBuild of
, {bitcask_nifs, keydir_trim_fstats, 2}

, {bitcask_nifs, increment_file_id, 1}
, {bitcask_nifs, increment_file_id, 2}

, {bitcask_nifs, lock_acquire_int, 2}
, {bitcask_nifs, lock_release_int, 1}
@@ -41,6 +42,7 @@ case PulseBuild of
, {bitcask_nifs, file_position_int, 2}
, {bitcask_nifs, file_seekbof_int, 1}

, {bitcask_file, '_', '_'}
, {bitcask_time, tstamp, 0}

, {prim_file, '_', '_'}
