Add functions for clearing bucket properties. #202

jerith commented Jun 28, 2012

This is an implementation of the riak_core functionality required for #197. Additional work is required in riak_kv to expose this to the outside world.

@ghost ghost assigned russelldb Aug 22, 2012

@@ -84,6 +85,17 @@ merge_props(Overriding, Other) ->
lists:ukeymerge(1, lists:ukeysort(1, Overriding),
lists:ukeysort(1, Other)).
+%% @spec clear_bucket(riak_object:bucket()) -> ok

rzezeski Aug 22, 2012


riak_object is not visible in riak_core. I think binary() is good enough if there isn't already a bucket type in core. Also, do you mind using module attribute specs? We are trying to move away from edoc specs.
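For illustration, a module-attribute spec equivalent to the edoc one in the diff above might look like this (a sketch only; the binary() type follows the suggestion above, and the function body is assumed):

```erlang
%% edoc-style spec (the older style being phased out):
%% @spec clear_bucket(binary()) -> ok

%% module-attribute spec (the preferred style):
-spec clear_bucket(Bucket :: binary()) -> ok.
clear_bucket(Bucket) when is_binary(Bucket) ->
    %% body elided; see the diff above
    ok.
```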


jerith Aug 23, 2012

When I merge in all the 1.2 work, I'll update my changes to match whatever the surrounding code uses. Would that be sufficient?


rzezeski Sep 4, 2012


I still see riak_object in the spec. It may be used in other places in core but that is a mistake. Please change this to binary().

@ghost ghost assigned rzezeski Aug 22, 2012

jerith commented Aug 23, 2012

This branch really needs to be updated to match the state of the repo. I'll probably have time to do that next week when I'm finished with all my traveling.


rzezeski commented Aug 23, 2012

Sounds good, safe travels.

This pull request passes (merged af318b5 into 86d5148).

jerith commented Aug 28, 2012

No merge conflicts and everything still works fine. Same for basho/riak_kv#357, except the tests fail there because they're being run against the wrong riak_core branch.

@rzezeski: Everything else in that file still uses edoc specs and I ran into various problems switching to compiler specs. I don't know enough about specs to fix them properly, so I'd rather leave it for someone more experienced.

@@ -500,6 +501,24 @@ update_meta(Key, Val, State) ->
+% @doc Clear a key in the cluster metadata dict

rzezeski Sep 4, 2012


Why not just do it all in one case to cut down on the number of lines:

case dict:find(...) of
    {ok, _} ->
        VClock = ...,
        ...;
    error ->
        ...
end
rzezeski Sep 4, 2012


Actually, this isn't going to work because it doesn't play nice with metadata reconciliation. The short story is that you can't just remove the meta entry: depending on the state of the cluster, ring reconciliation might make it reappear. I think you can achieve your goal with riak_core_ring:update_meta/3, using an empty list as the value, but I would have to run through the code to verify.


jerith Sep 5, 2012

Cool, thanks for the feedback.
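A minimal sketch of the update_meta/3 approach suggested above (unverified, as noted; the function name and the {bucket, Name} meta key are assumptions for illustration):

```erlang
%% Overwrite the bucket's props with an empty list instead of
%% removing the meta entry, so reconciliation has a value to merge.
clear_bucket(Name, Ring) ->
    riak_core_ring:update_meta({bucket, Name}, [], Ring).
```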


rzezeski commented Oct 26, 2012

@jerith I'm terribly sorry I let this linger for so long. I have
verified today that just removing the meta would be a bad idea. The
merge_meta function is a dict:merge which has the
following contract:

All the Key - Value pairs from both dictionaries are included in the
new dictionary.

Thus, without a tombstone value, the old properties would become
resurrected on the next ring reconcile. You can verify this yourself
by adding the following lines to the metadata_inequality_test.

    Ring3 = clear_meta(key,Ring2),
    ?assertEqual(undefined, get_meta(key, Ring3)),
    ?assertEqual(undefined, get_meta(key, ?CHSTATE{meta=merge_meta(Ring2?CHSTATE.meta, Ring3?CHSTATE.meta)})).

A solution that would work is to write a special tombstone value
instead, e.g. '$tombstone$'. Then change functions like
get_bucket to be tombstone aware and return default bucket props
in that case.

Does this make sense? Would you like to try updating your branch to
use the method I suggest? If not I could take a stab at it myself.
Let me know.
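The tombstone approach described above could be sketched roughly like this (hypothetical; the {bucket, Name} key and default_props/1 helper are assumptions, not the eventual implementation):

```erlang
-define(TOMBSTONE, '$tombstone$').

%% Write a tombstone rather than deleting the entry, so the
%% value survives dict:merge during ring reconciliation.
clear_bucket(Name, Ring) ->
    riak_core_ring:update_meta({bucket, Name}, ?TOMBSTONE, Ring).

%% Tombstone-aware lookup: treat a tombstoned bucket the same
%% as an unknown one and fall back to the default props.
get_bucket(Name, Ring) ->
    case riak_core_ring:get_meta({bucket, Name}, Ring) of
        undefined        -> default_props(Name);
        {ok, ?TOMBSTONE} -> default_props(Name);
        {ok, Props}      -> Props
    end.

default_props(Name) ->
    [{name, Name} | app_helper:get_env(riak_core, default_bucket_props, [])].
```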

jerith commented Oct 26, 2012

No worries, I've been rather busy myself. We've made some changes to our test setup that mitigate the problem this causes, so the situation isn't as bad as it was, but I'd still like to get this resolved.

As far as I can tell, using tombstone values would look like deletion from the outside (which is fine), but wouldn't solve the problem of unbounded metadata growth, which leads to dramatic performance hits when operating on bucket properties. We'd also have to add a mechanism for removing tombstoned properties from the metadata after reconciliation has finished. Is this something that's feasible to implement?


rzezeski commented Oct 27, 2012

I believe a similar mechanism to how nodes are removed could be used, but I have a feeling it would be a lot of work. I need to consult with @jtuple to be sure. Currently the ring is a dump for many different types of cluster-wide data. We want to add a system that allows sharing facts like bucket properties without using the ring, but I'm not sure when work on that might start.

Adding tombstone support would help alleviate growing metadata. It's not perfect, but it is an improvement: an atom is much less overhead than a dict. It could act as a decent stop-gap until we implement something better. If you'd like to modify this branch to implement tombstones, I will do my best to push it through. If not, I might just start my own branch. What do you think?

@ghost ghost assigned jrwest Oct 31, 2012


jrwest commented Oct 31, 2012

@jerith @rzezeski I'm planning to open a new PR with an implementation using the tombstone method, and I'm going to close this one out. I'll attach my PR to the original issue: #197

@jrwest jrwest closed this Oct 31, 2012

@ghost ghost assigned slfritchie Nov 13, 2012

@ghost ghost assigned jrwest Nov 13, 2012

snwight pushed a commit to snwight/riak_core that referenced this pull request Dec 25, 2013

logical deletion of ring metadata
* re-implementation of basho#202 using the tombstone method described here: basho#202 (comment) -- the implementation is a bit more general in that it applies to all metadata in the dict, not just bucket props (riak_core_ring:get_meta/3 is tombstone aware rather than riak_core_bucket:get_bucket/1)
* tombstones are not currently reaped, although it is possible to do so in the future