Rename/refactor a bunch of cuttlefish settings. #127

Merged
merged 2 commits into from

2 participants

@seancribbs
Owner

Also improved the docs and employed the @see option for multi_backend settings. For details about the changes, see f1dcf6e.
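For illustration only, a riak.conf snippet using the new dotted names could look like the sketch below. The setting names match the renamed mappings in this PR; the values are made up but valid per the new datatypes:

    bitcask.sync.strategy = interval
    bitcask.sync.interval = 10s
    bitcask.merge.policy = window
    bitcask.merge.window.start = 1
    bitcask.merge.window.end = 5
    bitcask.fold.max_age = unlimited
    bitcask.fold.max_puts = unlimited
    bitcask.expiry = off
    bitcask.hintfile_checksums = strict
    bitcask.expiry.grace_time = 1h

The old flat names such as bitcask.sync_strategy and bitcask.merge_window appear on the removed side of the diff further down.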

J4/CV/JD/SC and others added some commits
J4/CV/JD/SC [confbal] Rename and redesign a bunch of cuttlefish settings, improve docs.

* Setting name is removed from all docstrings. The name can be
  generated from the schema and will be displayed when using the
  "describe" cuttlefish command.
* sync_strategy -> sync.strategy; sync_interval -> sync.interval
* merge_window -> merge.policy -- There is no window if set to
  'always' or 'never', so "policy" seems a better name.
* merge_window.start/end -> merge.window.start/end
* frag_merge_trigger -> merge.triggers.fragmentation (more
  descriptive). The "is_percentage" validator was also added to
  constrain the values.
* dead_bytes_merge_trigger -> merge.triggers.dead_bytes (similar to
  frag_merge_trigger).
* frag_threshold -> thresholds.fragmentation (again adding
  "is_percentage" validator)
* dead_bytes_threshold -> thresholds.dead_bytes
* small_file_threshold -> thresholds.small_file
* max_fold_age -> fold.max_age. Added the possible value of
  `unlimited` (replacing the -1 magic value) as well as a duration in
  milliseconds. The input value is upscaled to microseconds, but we
  have no evidence of anyone using this setting. Probably best to
  change it in advanced.config.
* max_fold_puts -> fold.max_puts. Added value of `unlimited` which
  replaces the magic value of -1.
* Added `off` to expiry, which disables the feature (instead of the -1
  magic value).
* require_hint_crc -> hintfile_checksums. Changed the accepted values
  to `strict` and `allow_missing` instead of true/false.
* expiry_grace_time -> expiry.grace_time
* multi_backend settings were converted to use @see tags instead of
  copying the docstrings.
f1dcf6e
@seancribbs Update schema tests for renamed settings. 348a13e
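As a reading aid for the commit message above, here is a minimal standalone Erlang sketch that mirrors the conversions the new cuttlefish translations in priv/bitcask.schema perform when mapping the named riak.conf values back onto the legacy app.config sentinels. The module and function names are illustrative only, not part of the PR:

    -module(bitcask_legacy_values).
    -export([max_fold_age/1, max_fold_puts/1, expiry_secs/1, require_hint_crc/1]).

    %% riak.conf `unlimited` replaces the old -1 magic value; durations are
    %% given in milliseconds and upscaled to the microseconds app.config expects.
    max_fold_age(unlimited) -> -1;
    max_fold_age(Ms) when is_integer(Ms) -> Ms * 1000.

    %% `unlimited` replaces -1 here as well; plain integers pass through.
    max_fold_puts(unlimited) -> -1;
    max_fold_puts(N) when is_integer(N) -> N.

    %% `off` disables expiry, which the legacy setting expressed as -1.
    expiry_secs(off) -> -1;
    expiry_secs(Secs) when is_integer(Secs) -> Secs.

    %% `strict`/`allow_missing` replace the old true/false booleans.
    require_hint_crc(strict) -> true;
    require_hint_crc(allow_missing) -> false.

For example, fold.max_age = 12ms ends up as 12000 microseconds in app.config, which is exactly what the updated schema test asserts below.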
@slfritchie
Owner

Hi, all. Sorry I overlooked this PR during this week's merge madness. Looks nice, and doesn't appear to upset any existing riak_test scripts, as far as I can tell from today's 'master' r_t repo. +1, I'll merge it, thanks!

@slfritchie merged commit a00bfd8 into from
@seancribbs deleted the branch
Commits on Dec 13, 2013
  1. @seancribbs [confbal] Rename and redesign a bunch of cuttlefish settings, improve docs. (f1dcf6e, authored by J4/CV/JD/SC, committed by seancribbs)
Commits on Dec 14, 2013
  1. @seancribbs Update schema tests for renamed settings. (348a13e)
Showing with 249 additions and 279 deletions.
  1. +228 −261 priv/bitcask.schema
  2. +21 −18 test/bitcask_schema_tests.erl
489 priv/bitcask.schema
@@ -2,52 +2,57 @@
%%%% bitcask
-%% @doc bitcask data root
+%% @doc A path under which bitcask data files will be stored.
{mapping, "bitcask.data_root", "bitcask.data_root", [
{default, "{{platform_data_dir}}/bitcask"}
]}.
-%% @doc The open_timeout setting specifies the maximum time Bitcask will
-%% block on startup while attempting to create or open the data directory.
-%% The value is in seconds and the default is 4. You generally need not
-%% change this value. If for some reason the timeout is exceeded on open
-%% you'll see a log message of the form:
-%% "Failed to start bitcask backend: .... "
-%% Only then should you consider a longer timeout.
+%% @doc Specifies the maximum time Bitcask will block on startup while
+%% attempting to create or open the data directory. You generally need
+%% not change this value. If for some reason the timeout is exceeded
+%% on open you'll see a log message of the form: "Failed to start
+%% bitcask backend: .... " Only then should you consider a longer
+%% timeout.
{mapping, "bitcask.open_timeout", "bitcask.open_timeout", [
- {default, 4},
- {datatype, integer},
+ {default, "4s"},
+ {datatype, {duration, s}},
{level, advanced}
]}.
-%% @doc The `sync_strategy` setting changes the durability of writes by specifying
-%% when to synchronize data to disk. The default setting protects against data
-%% loss in the event of application failure (process death) but leaves open a
-%% small window wherein data could be lost in the event of complete system
-%% failure (e.g. hardware, O/S, power).
+%% @doc Changes the durability of writes by specifying when to
+%% synchronize data to disk. The default setting protects against data
+%% loss in the event of application failure (process death) but leaves
+%% open a small window wherein data could be lost in the event of
+%% complete system failure (e.g. hardware, O/S, power).
%%
-%% The default mode, `none`, writes data into operating system buffers which
-%% which will be written to the disks when those buffers are flushed by the
-%% operating system. If the system fails (power loss, crash, etc.) before
-%% before those buffers are flushed to stable storage that data is lost.
+%% The default mode, `none`, writes data into operating system buffers
+%% which will be written to the disks when those buffers are flushed
+%% by the operating system. If the system fails (power loss, crash,
+%% etc.) before those buffers are flushed to stable storage, that
+%% data is lost.
%%
-%% This is prevented by the setting `o_sync` which forces the operating system
-%% to flush to stable storage at every write. The effect of flushing each
-%% write is better durability, however write throughput will suffer as each
-%% write will have to wait for the write to complete.
+%% This is prevented by the setting `o_sync` which forces the
+%% operating system to flush to stable storage at every write. The
+%% effect of flushing each write is better durability, however write
+%% throughput will suffer as each write will have to wait for the
+%% write to complete.
%%
-%% ___Available Sync Strategies___
+%% Available Sync Strategies:
%%
-%% * `none` - (default) Lets the operating system manage syncing writes.
-%% * `o_sync` - Uses the O_SYNC flag which forces syncs on every write.
-%% * `interval` - Riak will force Bitcask to sync every `bitcask.sync_interval` seconds.
-{mapping, "bitcask.sync_strategy", "bitcask.sync_strategy", [
+%% * `none` - (default) Lets the operating system manage syncing
+%% writes.
+%% * `o_sync` - Uses the O_SYNC flag which forces syncs on every
+%% write.
+%% * `interval` - Riak will force Bitcask to sync every
+%% `bitcask.sync.interval` seconds.
+{mapping, "bitcask.sync.strategy", "bitcask.sync_strategy", [
{default, none},
{datatype, {enum, [none, o_sync, interval]}},
{level, advanced}
]}.
-{mapping, "bitcask.sync_interval", "bitcask.sync_strategy", [
+%% @see bitcask.sync.strategy
+{mapping, "bitcask.sync.interval", "bitcask.sync_strategy", [
{datatype, {duration, s}},
{level, advanced}
]}.
@@ -55,20 +60,20 @@
{translation,
"bitcask.sync_strategy",
fun(Conf) ->
- Setting = cuttlefish:conf_get("bitcask.sync_strategy", Conf),
+ Setting = cuttlefish:conf_get("bitcask.sync.strategy", Conf),
case Setting of
none -> none;
o_sync -> o_sync;
interval ->
- Interval = cuttlefish:conf_get("bitcask.sync_interval", Conf, undefined),
- {seconds, Interval};
+ Interval = cuttlefish:conf_get("bitcask.sync.interval", Conf, undefined),
+ {seconds, Interval};
_Default -> none
end
end}.
-%% @doc The `max_file_size` setting describes the maximum permitted size for any
-%% single data file in the Bitcask directory. If a write causes the current
-%% file to exceed this size threshold then that file is closed, and a new file
+%% @doc Describes the maximum permitted size for any single data file
+%% in the Bitcask directory. If a write causes the current file to
+%% exceed this size threshold then that file is closed, and a new file
%% is opened for writes.
{mapping, "bitcask.max_file_size", "bitcask.max_file_size", [
{default, "2GB"},
@@ -77,31 +82,33 @@
]}.
-%% @doc The `merge_window` setting lets you specify when during the day merge
-%% operations are allowed to be triggered. Valid options are:
+%% @doc Lets you specify when during the day merge operations are
+%% allowed to be triggered. Valid options are:
%%
%% * `always` (default) No restrictions
%% * `never` Merge will never be attempted
%% * `window` Hours during which merging is permitted, where
-%% `bitcask.merge_window.start` and
-%% `bitcask.merge_window.end` are integers between 0 and 23.
+%% `bitcask.merge.window.start` and `bitcask.merge.window.end` are
+%% integers between 0 and 23.
%%
-%% If merging has a significant impact on performance of your cluster, or your
-%% cluster has quiet periods in which little storage activity occurs, you may
-%% want to change this setting from the default.
-{mapping, "bitcask.merge_window", "bitcask.merge_window", [
+%% If merging has a significant impact on performance of your cluster,
+%% or your cluster has quiet periods in which little storage activity
+%% occurs, you may want to change this setting from the default.
+{mapping, "bitcask.merge.policy", "bitcask.merge_window", [
{default, always},
{datatype, {enum, [always, never, window]}},
{level, advanced}
]}.
-{mapping, "bitcask.merge_window.start", "bitcask.merge_window", [
+%% @see bitcask.merge.policy
+{mapping, "bitcask.merge.window.start", "bitcask.merge_window", [
{default, 0},
{datatype, integer},
{level, advanced}
]}.
-{mapping, "bitcask.merge_window.end", "bitcask.merge_window", [
+%% @see bitcask.merge.policy
+{mapping, "bitcask.merge.window.end", "bitcask.merge_window", [
{default, 23},
{datatype, integer},
{level, advanced}
@@ -111,138 +118,196 @@
{translation,
"bitcask.merge_window",
fun(Conf) ->
- Setting = cuttlefish:conf_get("bitcask.merge_window", Conf),
+ Setting = cuttlefish:conf_get("bitcask.merge.policy", Conf),
case Setting of
always -> always;
never -> never;
window ->
- Start = cuttlefish:conf_get("bitcask.merge_window.start", Conf, undefined),
- End = cuttlefish:conf_get("bitcask.merge_window.end", Conf, undefined),
- {Start, End};
+ Start = cuttlefish:conf_get("bitcask.merge.window.start", Conf, undefined),
+ End = cuttlefish:conf_get("bitcask.merge.window.end", Conf, undefined),
+ {Start, End};
_Default -> always
end
end}.
-%% @doc `frag_merge_trigger` setting describes what ratio of
-%% dead keys to total keys in a file will trigger merging. The value of this
-%% setting is a percentage (0-100). For example, if a data file contains 6
-%% dead keys and 4 live keys, then merge will be triggered at the default
-%% setting. Increasing this value will cause merging to occur less often,
-%% whereas decreasing the value will cause merging to happen more often.
+%% @doc Describes what ratio of dead keys to total keys in a file will
+%% trigger merging. The value of this setting is a percentage
+%% (0-100). For example, if a data file contains 6 dead keys and 4
+%% live keys, then merge will be triggered at the default
+%% setting. Increasing this value will cause merging to occur less
+%% often, whereas decreasing the value will cause merging to happen
+%% more often.
%%
%% Default is: `60`
-{mapping, "bitcask.frag_merge_trigger", "bitcask.frag_merge_trigger", [
+{mapping, "bitcask.merge.triggers.fragmentation",
+ "bitcask.frag_merge_trigger",
+ [
{datatype, integer},
{level, advanced},
- {default, 60}
+ {default, 60},
+ {validators, ["is_percentage"]}
]}.
+{validator,
+ "is_percentage",
+ "must be a percentage",
+ fun(Value) ->
+ Value >= 0 andalso Value =< 100
+ end}.
-%% @doc `dead_bytes_merge_trigger` setting describes how much
-%% data stored for dead keys in a single file will trigger merging. The
-%% value is in bytes. If a file meets or exceeds the trigger value for dead
-%% bytes, merge will be triggered. Increasing the value will cause merging
-%% to occur less often, whereas decreasing the value will cause merging to
-%% happen more often.
+%% @doc Describes how much data stored for dead keys in a single file
+%% will trigger merging. The value is in bytes. If a file meets or
+%% exceeds the trigger value for dead bytes, merge will be
+%% triggered. Increasing the value will cause merging to occur less
+%% often, whereas decreasing the value will cause merging to happen
+%% more often.
%%
-%% When either of these constraints are met by any file in the directory,
-%% Bitcask will attempt to merge files.
+%% When either of these constraints are met by any file in the
+%% directory, Bitcask will attempt to merge files.
%%
-%% Default is: 512MB in bytes
-{mapping, "bitcask.dead_bytes_merge_trigger", "bitcask.dead_bytes_merge_trigger", [
+%% Default is: 512MB
+{mapping, "bitcask.merge.triggers.dead_bytes",
+ "bitcask.dead_bytes_merge_trigger",
+ [
{datatype, bytesize},
{level, advanced},
{default, "512MB"}
]}.
-%% @doc `frag_threshold` setting describes what ratio of
-%% dead keys to total keys in a file will cause it to be included in the
-%% merge. The value of this setting is a percentage (0-100). For example,
-%% if a data file contains 4 dead keys and 6 live keys, it will be included
-%% in the merge at the default ratio. Increasing the value will cause fewer
-%% files to be merged, decreasing the value will cause more files to be
-%% merged.
+%% @doc Describes what ratio of dead keys to total keys in a file will
+%% cause it to be included in the merge. The value of this setting is
+%% a percentage (0-100). For example, if a data file contains 4 dead
+%% keys and 6 live keys, it will be included in the merge at the
+%% default ratio. Increasing the value will cause fewer files to be
+%% merged, decreasing the value will cause more files to be merged.
%%
%% Default is: `40`
-{mapping, "bitcask.frag_threshold", "bitcask.frag_threshold", [
+{mapping, "bitcask.merge.thresholds.fragmentation",
+ "bitcask.frag_threshold",
+[
{datatype, integer},
{level, advanced},
- {default, 40}
+ {default, 40},
+ {validators, ["is_percentage"]}
]}.
-%% @doc `dead_bytes_threshold` setting describes the minimum
-%% amount of data occupied by dead keys in a file to cause it to be included
-%% in the merge. Increasing the value will cause fewer files to be merged,
-%% decreasing the value will cause more files to be merged.
+%% @doc Describes the minimum amount of data occupied by dead keys in
+%% a file to cause it to be included in the merge. Increasing the
+%% value will cause fewer files to be merged, decreasing the value
+%% will cause more files to be merged.
%%
-%% Default is: 128MB in bytes
-{mapping, "bitcask.dead_bytes_threshold", "bitcask.dead_bytes_threshold", [
+%% Default is: 128MB
+{mapping, "bitcask.merge.thresholds.dead_bytes",
+ "bitcask.dead_bytes_threshold", [
{datatype, bytesize},
{level, advanced},
{default, "128MB"}
]}.
-%% @doc `small_file_threshold` setting describes the minimum
-%% size a file must have to be _excluded_ from the merge. Files smaller
-%% than the threshold will be included. Increasing the value will cause
-%% _more_ files to be merged, decreasing the value will cause _fewer_ files
-%% to be merged.
+%% @doc Describes the minimum size a file must have to be _excluded_
+%% from the merge. Files smaller than the threshold will be
+%% included. Increasing the value will cause _more_ files to be
+%% merged, decreasing the value will cause _fewer_ files to be merged.
%%
-%% Default is: 10MB in bytes
-{mapping, "bitcask.small_file_threshold", "bitcask.small_file_threshold", [
+%% Default is: 10MB
+{mapping, "bitcask.merge.thresholds.small_file",
+ "bitcask.small_file_threshold", [
{datatype, bytesize},
{level, advanced},
{default, "10MB"}
]}.
-%% @doc Fold keys thresholds will reuse the keydir if another fold was started less
-%% than `max_fold_age` ago and there were less than `max_fold_puts` updates.
-%% Otherwise it will wait until all current fold keys complete and then start.
-%% Set either option to -1 to disable.
-%% Age in micro seconds (-1 means "unlimited")
-{mapping, "bitcask.max_fold_age", "bitcask.max_fold_age", [
- {datatype, integer},
+%% @doc Fold keys thresholds will reuse the keydir if another fold was
+%% started less than `fold.max_age` ago and there were less than
+%% `fold.max_puts` updates. Otherwise it will wait until all current
+%% fold keys complete and then start. Set either option to unlimited
+%% to disable.
+{mapping, "bitcask.fold.max_age", "bitcask.max_fold_age", [
+ {datatype, [{atom, unlimited}, {duration, ms}]},
{level, advanced},
- {default, -1}
+ {default, unlimited}
]}.
-{mapping, "bitcask.max_fold_puts", "bitcask.max_fold_puts", [
- {datatype, integer},
+{translation, "bitcask.max_fold_age",
+ fun(Conf) ->
+ case cuttlefish:conf_get("bitcask.fold.max_age", Conf) of
+ unlimited -> -1;
+ I when is_integer(I) ->
+ %% app.config expects microseconds
+ I * 1000;
+ _ -> -1 %% The default, for safety
+ end
+ end
+}.
+
+%% @see bitcask.fold.max_age
+{mapping, "bitcask.fold.max_puts", "bitcask.max_fold_puts", [
+ {datatype, [integer, {atom, unlimited}]},
{level, advanced},
{default, 0}
]}.
-%% @doc By default, Bitcask keeps all of your data around. If your data has
-%% limited time-value, or if for space reasons you need to purge data, you can
-%% set the `expiry_secs` option. If you needed to purge data automatically
-%% after 1 day, set the value to `1d`.
+{translation, "bitcask.max_fold_puts",
+ fun(Conf) ->
+ case cuttlefish:conf_get("bitcask.fold.max_puts", Conf) of
+ unlimited -> -1;
+ I when is_integer(I) -> I;
+ _ -> 0 %% default catch
+ end
+ end
+}.
+
+%% @doc By default, Bitcask keeps all of your data around. If your
+%% data has limited time-value, or if for space reasons you need to
+%% purge data, you can set the `expiry` option. If you needed to
+%% purge data automatically after 1 day, set the value to `1d`.
%%
-%% Default is: `-1` which disables automatic expiration
+%% Default is: `off` which disables automatic expiration
{mapping, "bitcask.expiry", "bitcask.expiry_secs", [
- {datatype, {duration, s}},
+ {datatype, [{atom, off}, {duration, s}]},
{level, advanced},
- {default, -1}
+ {default, off}
]}.
+{translation, "bitcask.expiry_secs",
+ fun(Conf) ->
+ case cuttlefish:conf_get("bitcask.expiry", Conf) of
+ off -> -1;
+ I when is_integer(I) -> I;
+ _ -> -1
+ end
+ end
+}.
%% @doc Require the CRC to be present at the end of hintfiles.
-%% Setting this to false runs Bitcask in a backward compatible mode
-%% where old hint files will still be accepted without CRC signatures.
-{mapping, "bitcask.require_hint_crc", "bitcask.require_hint_crc", [
- {default, true},
- {datatype, {enum, [true, false]}},
+%% Setting this to `allow_missing` runs Bitcask in a backward
+%% compatible mode where old hint files will still be accepted without
+%% CRC signatures.
+{mapping, "bitcask.hintfile_checksums", "bitcask.require_hint_crc", [
+ {default, strict},
+ {datatype, {enum, [strict, allow_missing]}},
{level, advanced}
]}.
-%% By default, Bitcask will trigger a merge whenever a data file contains
-%% an expired key. This may result in excessive merging under some usage
-%% patterns. To prevent this you can set the `expiry_grace_time` option.
-%% Bitcask will defer triggering a merge solely for key expiry by the
-%% configured number of seconds. Setting this to `1h` effectively limits
-%% each cask to merging for expiry once per hour.
+{translation, "bitcask.require_hint_crc",
+ fun(Conf) ->
+ case cuttlefish:conf_get("bitcask.hintfile_checksums", Conf) of
+ strict -> true;
+ allow_missing -> false;
+ _ -> true
+ end
+ end}.
+
+%% @doc By default, Bitcask will trigger a merge whenever a data file
+%% contains an expired key. This may result in excessive merging under
+%% some usage patterns. To prevent this you can set the
+%% `bitcask.expiry.grace_time` option. Bitcask will defer triggering
+%% a merge solely for key expiry by the configured number of
+%% seconds. Setting this to `1h` effectively limits each cask to
+%% merging for expiry once per hour.
%%
%% Default is: `0`
-{mapping, "bitcask.expiry_grace_time", "bitcask.expiry_grace_time", [
+{mapping, "bitcask.expiry.grace_time", "bitcask.expiry_grace_time", [
{datatype, {duration, s}},
{level, advanced},
{default, 0}
@@ -261,61 +326,32 @@
{datatype, {enum, [erlang, nif]}}
]}.
-%% @doc bitcask data root
+%% @see bitcask.data_root
{mapping, "multi_backend.$name.bitcask.data_root", "riak_kv.multi_backend", [
{level, advanced}
]}.
-
-%% @doc The open_timeout setting specifies the maximum time Bitcask will
-%% block on startup while attempting to create or open the data directory.
-%% The value is in seconds and the default is 4. You generally need not
-%% change this value. If for some reason the timeout is exceeded on open
-%% you'll see a log message of the form:
-%% "Failed to start bitcask backend: .... "
-%% Only then should you consider a longer timeout.
+%% @see bitcask.open_timeout
{mapping, "multi_backend.$name.bitcask.open_timeout", "riak_kv.multi_backend", [
- {default, 4},
- {datatype, integer},
+ {default, "4s"},
+ {datatype, {duration, s}},
{level, advanced}
]}.
-%% @doc The `sync_strategy` setting changes the durability of writes by specifying
-%% when to synchronize data to disk. The default setting protects against data
-%% loss in the event of application failure (process death) but leaves open a
-%% small window wherein data could be lost in the event of complete system
-%% failure (e.g. hardware, O/S, power).
-%%
-%% The default mode, `none`, writes data into operating system buffers which
-%% which will be written to the disks when those buffers are flushed by the
-%% operating system. If the system fails (power loss, crash, etc.) before
-%% before those buffers are flushed to stable storage that data is lost.
-%%
-%% This is prevented by the setting `o_sync` which forces the operating system
-%% to flush to stable storage at every write. The effect of flushing each
-%% write is better durability, however write throughput will suffer as each
-%% write will have to wait for the write to complete.
-%%
-%% ___Available Sync Strategies___
-%%
-%% * `none` - (default) Lets the operating system manage syncing writes.
-%% * `o_sync` - Uses the O_SYNC flag which forces syncs on every write.
-%% * `interval` - Riak will force Bitcask to sync every `bitcask.sync_interval` seconds.
-{mapping, "multi_backend.$name.bitcask.sync_strategy", "riak_kv.multi_backend", [
+%% @see bitcask.sync.strategy
+{mapping, "multi_backend.$name.bitcask.sync.strategy", "riak_kv.multi_backend", [
{default, none},
{datatype, {enum, [none, o_sync, interval]}},
{level, advanced}
]}.
-{mapping, "multi_backend.$name.bitcask.sync_interval", "riak_kv.multi_backend", [
+%% @see bitcask.sync.strategy
+{mapping, "multi_backend.$name.bitcask.sync.interval", "riak_kv.multi_backend", [
{datatype, {duration, s}},
{level, advanced}
]}.
-%% @doc The `max_file_size` setting describes the maximum permitted size for any
-%% single data file in the Bitcask directory. If a write causes the current
-%% file to exceed this size threshold then that file is closed, and a new file
-%% is opened for writes.
+%% @see bitcask.max_file_size
{mapping, "multi_backend.$name.bitcask.max_file_size", "riak_kv.multi_backend", [
{default, "2GB"},
{datatype, bytesize},
@@ -323,171 +359,102 @@
]}.
-%% @doc The `merge_window` setting lets you specify when during the day merge
-%% operations are allowed to be triggered. Valid options are:
-%%
-%% * `always` (default) No restrictions
-%% * `never` Merge will never be attempted
-%% * `window` Hours during which merging is permitted, where
-%% `bitcask.merge_window.start` and
-%% `bitcask.merge_window.end` are integers between 0 and 23.
-%%
-%% If merging has a significant impact on performance of your cluster, or your
-%% cluster has quiet periods in which little storage activity occurs, you may
-%% want to change this setting from the default.
-{mapping, "multi_backend.$name.bitcask.merge_window", "riak_kv.multi_backend", [
+%% @see bitcask.merge.policy
+{mapping, "multi_backend.$name.bitcask.merge.policy", "riak_kv.multi_backend", [
{default, always},
{datatype, {enum, [always, never, window]}},
{level, advanced}
]}.
-{mapping, "multi_backend.$name.bitcask.merge_window.start", "riak_kv.multi_backend", [
+%% @see bitcask.merge.policy
+{mapping, "multi_backend.$name.bitcask.merge.window.start", "riak_kv.multi_backend", [
{default, 0},
{datatype, integer},
{level, advanced}
]}.
-{mapping, "multi_backend.$name.bitcask.merge_window.end", "riak_kv.multi_backend", [
+%% @see bitcask.merge.policy
+{mapping, "multi_backend.$name.bitcask.merge.window.end", "riak_kv.multi_backend", [
{default, 23},
{datatype, integer},
{level, advanced}
]}.
-%% @doc `frag_merge_trigger` setting describes what ratio of
-%% dead keys to total keys in a file will trigger merging. The value of this
-%% setting is a percentage (0-100). For example, if a data file contains 6
-%% dead keys and 4 live keys, then merge will be triggered at the default
-%% setting. Increasing this value will cause merging to occur less often,
-%% whereas decreasing the value will cause merging to happen more often.
-%%
-%% Default is: `60`
-{mapping, "multi_backend.$name.bitcask.frag_merge_trigger", "riak_kv.multi_backend", [
+%% @see bitcask.merge.triggers.fragmentation
+{mapping, "multi_backend.$name.bitcask.merge.triggers.fragmentation", "riak_kv.multi_backend", [
{datatype, integer},
{level, advanced},
- {default, 60}
+ {default, 60},
+ {validators, ["is_percentage"]}
]}.
-%% @doc `dead_bytes_merge_trigger` setting describes how much
-%% data stored for dead keys in a single file will trigger merging. The
-%% value is in bytes. If a file meets or exceeds the trigger value for dead
-%% bytes, merge will be triggered. Increasing the value will cause merging
-%% to occur less often, whereas decreasing the value will cause merging to
-%% happen more often.
-%%
-%% When either of these constraints are met by any file in the directory,
-%% Bitcask will attempt to merge files.
-%%
-%% Default is: 512mb in bytes
-{mapping, "multi_backend.$name.bitcask.dead_bytes_merge_trigger", "riak_kv.multi_backend", [
+%% @see bitcask.merge.triggers.dead_bytes
+{mapping, "multi_backend.$name.bitcask.merge.triggers.dead_bytes", "riak_kv.multi_backend", [
{datatype, bytesize},
{level, advanced},
{default, "512MB"}
]}.
-%% @doc `frag_threshold` setting describes what ratio of
-%% dead keys to total keys in a file will cause it to be included in the
-%% merge. The value of this setting is a percentage (0-100). For example,
-%% if a data file contains 4 dead keys and 6 live keys, it will be included
-%% in the merge at the default ratio. Increasing the value will cause fewer
-%% files to be merged, decreasing the value will cause more files to be
-%% merged.
-%%
-%% Default is: `40`
-{mapping, "multi_backend.$name.bitcask.frag_threshold", "riak_kv.multi_backend", [
+%% @see bitcask.merge.thresholds.fragmentation
+{mapping, "multi_backend.$name.bitcask.thresholds.fragmentation", "riak_kv.multi_backend", [
{datatype, integer},
{level, advanced},
- {default, 40}
+ {default, 40},
+ {validators, ["is_percentage"]}
]}.
-%% @doc `dead_bytes_threshold` setting describes the minimum
-%% amount of data occupied by dead keys in a file to cause it to be included
-%% in the merge. Increasing the value will cause fewer files to be merged,
-%% decreasing the value will cause more files to be merged.
-%%
-%% Default is: 128mb in bytes
-{mapping, "multi_backend.$name.bitcask.dead_bytes_threshold", "riak_kv.multi_backend", [
+%% @see bitcask.merge.thresholds.dead_bytes
+{mapping, "multi_backend.$name.bitcask.thresholds.dead_bytes", "riak_kv.multi_backend", [
{datatype, bytesize},
{level, advanced},
{default, "128MB"}
]}.
-%% @doc `small_file_threshold` setting describes the minimum
-%% size a file must have to be _excluded_ from the merge. Files smaller
-%% than the threshold will be included. Increasing the value will cause
-%% _more_ files to be merged, decreasing the value will cause _fewer_ files
-%% to be merged.
-%%
-%% Default is: 10mb in bytes
-{mapping, "multi_backend.$name.bitcask.small_file_threshold", "riak_kv.multi_backend", [
+%% @see bitcask.merge.thresholds.small_file
+{mapping, "multi_backend.$name.bitcask.thresholds.small_file", "riak_kv.multi_backend", [
{datatype, bytesize},
{level, advanced},
{default, "10MB"}
]}.
-%% @doc Fold keys thresholds will reuse the keydir if another fold was started less
-%% than `max_fold_age` ago and there were less than `max_fold_puts` updates.
-%% Otherwise it will wait until all current fold keys complete and then start.
-%% Set either option to -1 to disable.
-%% Age in micro seconds (-1 means "unlimited")
-{mapping, "multi_backend.$name.bitcask.max_fold_age", "riak_kv.multi_backend", [
- {datatype, integer},
+%% @see bitcask.fold.max_age
+{mapping, "multi_backend.$name.bitcask.fold.max_age", "riak_kv.multi_backend", [
+ {datatype, [{atom, unlimited}, {duration, ms}]},
{level, advanced},
- {default, -1}
+ {default, unlimited}
]}.
-{mapping, "multi_backend.$name.bitcask.max_fold_puts", "riak_kv.multi_backend", [
- {datatype, integer},
+%% @see bitcask.fold.max_age
+{mapping, "multi_backend.$name.bitcask.fold.max_puts", "riak_kv.multi_backend", [
+ {datatype, [integer, {atom, unlimited}]},
{level, advanced},
{default, 0}
]}.
-%% @doc By default, Bitcask keeps all of your data around. If your data has
-%% limited time-value, or if for space reasons you need to purge data, you can
-%% set the `expiry_secs` option. If you needed to purge data automatically
-%% after 1 day, set the value to `1d`.
-%%
-%% Default is: `-1` which disables automatic expiration
+%% @see bitcask.expiry
{mapping, "multi_backend.$name.bitcask.expiry", "riak_kv.multi_backend", [
- {datatype, {duration, s}},
+ {datatype, [{atom, off}, {duration, s}]},
{level, advanced},
{default, -1}
]}.
-%% @doc Require the CRC to be present at the end of hintfiles.
-%% Bitcask defaults to a backward compatible mode where
-%% old hint files will still be accepted without them.
-%% It is safe to set this true for new deployments and will
-%% become the default setting in a future release.
-{mapping, "multi_backend.$name.bitcask.require_hint_crc", "riak_kv.multi_backend", [
- {default, true},
- {datatype, {enum, [true, false]}},
+%% @see bitcask.hintfile_checksums
+{mapping, "multi_backend.$name.bitcask.hintfile_checksums", "riak_kv.multi_backend", [
+ {default, strict},
+ {datatype, {enum, [strict, allow_missing]}},
{level, advanced}
]}.
-%% By default, Bitcask will trigger a merge whenever a data file contains
-%% an expired key. This may result in excessive merging under some usage
-%% patterns. To prevent this you can set the `expiry_grace_time` option.
-%% Bitcask will defer triggering a merge solely for key expiry by the
-%% configured number of seconds. Setting this to `1h` effectively limits
-%% each cask to merging for expiry once per hour.
-%%
-%% Default is: `0`
-{mapping, "multi_backend.$name.bitcask.expiry_grace_time", "riak_kv.multi_backend", [
+%% @see bitcask.expiry.grace_time
+{mapping, "multi_backend.$name.bitcask.expiry.grace_time", "riak_kv.multi_backend", [
{datatype, {duration, s}},
{level, advanced},
{default, 0}
]}.
-%% @doc Configure how Bitcask writes data to disk.
-%% erlang: Erlang's built-in file API
-%% nif: Direct calls to the POSIX C API
-%%
-%% The NIF mode provides higher throughput for certain
-%% workloads, but has the potential to negatively impact
-%% the Erlang VM, leading to higher worst-case latencies
-%% and possible throughput collapse.
+%% @see bitcask.io_mode
{mapping, "multi_backend.$name.bitcask.io_mode", "riak_kv.multi_backend", [
{default, erlang},
{datatype, {enum, [erlang, nif]}},
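Taken together, and assuming the riak.conf sketch near the top of this page, the translations in the schema above would yield roughly the following bitcask section of the generated app.config. This is a simplified sketch showing only the settings from that riak.conf snippet; the surrounding structure comes from cuttlefish:

    {bitcask, [
        {sync_strategy, {seconds, 10}},   %% sync.strategy = interval, sync.interval = 10s
        {merge_window, {1, 5}},           %% merge.policy = window, hours 1..5
        {max_fold_age, -1},               %% fold.max_age = unlimited
        {max_fold_puts, -1},              %% fold.max_puts = unlimited
        {expiry_secs, -1},                %% expiry = off
        {require_hint_crc, true},         %% hintfile_checksums = strict
        {expiry_grace_time, 3600}         %% expiry.grace_time = 1h
    ]}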
39 test/bitcask_schema_tests.erl
@@ -6,6 +6,7 @@
%% basic schema test will check to make sure that all defaults from the schema
%% make it into the generated app.config
basic_schema_test() ->
+ lager:start(),
%% The defaults are defined in ../priv/bitcask.schema. it is the file under test.
Config = cuttlefish_unit:generate_templated_config("../priv/bitcask.schema", [], context()),
@@ -31,10 +32,11 @@ basic_schema_test() ->
ok.
merge_window_test() ->
+ lager:start(),
Conf = [
- {["bitcask", "merge_window"], window},
- {["bitcask", "merge_window", "start"], 0},
- {["bitcask", "merge_window", "end"], 12}
+ {["bitcask", "merge", "policy"], window},
+ {["bitcask", "merge", "window", "start"], 0},
+ {["bitcask", "merge", "window", "end"], 12}
],
%% The defaults are defined in ../priv/bitcask.schema. it is the file under test.
@@ -62,27 +64,28 @@ merge_window_test() ->
ok.
override_schema_test() ->
+ lager:start(),
%% Conf represents the riak.conf file that would be read in by cuttlefish.
%% this proplists is what would be output by the conf_parse module
Conf = [
{["bitcask", "data_root"], "/absolute/data/bitcask"},
{["bitcask", "open_timeout"], 2},
- {["bitcask", "sync_strategy"], interval},
- {["bitcask", "sync_interval"], "10s"},
+ {["bitcask", "sync", "strategy"], interval},
+ {["bitcask", "sync", "interval"], "10s"},
{["bitcask", "max_file_size"], "4GB"},
- {["bitcask", "merge_window"], never},
- {["bitcask", "merge_window", "start"], 0},
- {["bitcask", "merge_window", "end"], 12},
- {["bitcask", "frag_merge_trigger"], 20},
- {["bitcask", "dead_bytes_merge_trigger"], "256MB"},
- {["bitcask", "frag_threshold"], 10},
- {["bitcask", "dead_bytes_threshold"], "64MB"},
- {["bitcask", "small_file_threshold"], "5MB"},
- {["bitcask", "max_fold_age"], 12},
- {["bitcask", "max_fold_puts"], 7},
+ {["bitcask", "merge", "policy"], never},
+ {["bitcask", "merge", "window", "start"], 0},
+ {["bitcask", "merge", "window", "end"], 12},
+ {["bitcask", "merge", "triggers", "fragmentation"], 20},
+ {["bitcask", "merge", "triggers", "dead_bytes"], "256MB"},
+ {["bitcask", "merge", "thresholds", "fragmentation"], 10},
+ {["bitcask", "merge", "thresholds", "dead_bytes"], "64MB"},
+ {["bitcask", "merge", "thresholds", "small_file"], "5MB"},
+ {["bitcask", "fold", "max_age"], "12ms"},
+ {["bitcask", "fold", "max_puts"], 7},
{["bitcask", "expiry"], "20s" },
- {["bitcask", "require_hint_crc"], false },
- {["bitcask", "expiry_grace_time"], "15s" },
+ {["bitcask", "hintfile_checksums"], "allow_missing"},
+ {["bitcask", "expiry", "grace_time"], "15s" },
{["bitcask", "io_mode"], nif}
],
@@ -99,7 +102,7 @@ override_schema_test() ->
cuttlefish_unit:assert_config(Config, "bitcask.frag_threshold", 10),
cuttlefish_unit:assert_config(Config, "bitcask.dead_bytes_threshold", 67108864),
cuttlefish_unit:assert_config(Config, "bitcask.small_file_threshold", 5242880),
- cuttlefish_unit:assert_config(Config, "bitcask.max_fold_age", 12),
+ cuttlefish_unit:assert_config(Config, "bitcask.max_fold_age", 12000),
cuttlefish_unit:assert_config(Config, "bitcask.max_fold_puts", 7),
cuttlefish_unit:assert_config(Config, "bitcask.expiry_secs", 20),
cuttlefish_unit:assert_config(Config, "bitcask.require_hint_crc", false),
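If extra coverage of the new named values were wanted, a further test along these lines could be added. This is a sketch only, reusing the cuttlefish_unit helpers and context/0 from the existing tests; the literal form the conf parser expects for `unlimited` and `off` may need adjusting:

    %% Sketch: explicit named values should map back to the legacy -1 sentinels.
    named_values_test() ->
        lager:start(),
        Conf = [
            {["bitcask", "fold", "max_age"], unlimited},  %% assumed atom form
            {["bitcask", "expiry"], off}                  %% assumed atom form
        ],
        Config = cuttlefish_unit:generate_templated_config(
                   "../priv/bitcask.schema", Conf, context()),
        cuttlefish_unit:assert_config(Config, "bitcask.max_fold_age", -1),
        cuttlefish_unit:assert_config(Config, "bitcask.expiry_secs", -1),
        ok.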