Tombstones are not reaped if reaping occurs before tombstones reach all replicas [JIRA: RIAK-2803] #311
Comments
Reported in zd://1139
Also, there is a perhaps related issue in the thread started here: http://lists.basho.com/pipermail/riak-users_lists.basho.com/2012-May/008389.html, which might look more like this issue starting here: http://lists.basho.com/pipermail/riak-users_lists.basho.com/2012-May/008423.html. It isn't clear to me whether they're the same thing. @reiddraper might be able to say with more authority whether we're looking at one issue or two related issues.
https://gist.github.com/2938621 reproduces the issue on my machine on a clean cluster
Moving to 2.1 milestone. Speak up if there are any objections, please.
In case it helps anyone, here's a quick Ruby script to crawl through and remove the tombstones. Use at your own risk: you're not supposed to list all keys in production, but for us the alternative was having 560K tombstones sitting around taking up space when we only needed ~3K active keys. Toss this in a monthly cron job and there you go. Also, if you needed the script to use less memory, you could alter the first curl request to: Note that the riak_tombstone_cleanup.rb
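The script itself didn't survive in this copy of the thread. As a rough illustration of the approach the comment describes, here is a minimal sketch, assuming Riak's old HTTP API (`/riak/<bucket>?keys=true` key listing, `deletedvclock=true` to expose a tombstone's vclock on a 404, and the `X-Riak-Vclock` header); the host, port, and bucket name are placeholders, and this is not the original `riak_tombstone_cleanup.rb`:

```ruby
require "json"
require "net/http"

RIAK   = "http://127.0.0.1:8098" # assumed node address
BUCKET = "my_bucket"             # assumed bucket name

# Pull the key list out of the JSON body returned by
# GET /riak/<bucket>?keys=true  (shape: {"keys":["k1","k2",...]}).
def extract_keys(body)
  JSON.parse(body).fetch("keys", [])
end

# With ?deletedvclock=true, a GET on a tombstoned key returns 404 but
# still exposes the tombstone's vclock; return it, or nil for live keys.
def tombstone_vclock(status, headers)
  return nil unless status == 404
  headers["X-Riak-Vclock"]
end

# Crawl every key and re-issue a DELETE (with the tombstone's vclock)
# for each tombstone so Riak can reap it.
def cleanup!
  body = Net::HTTP.get(URI("#{RIAK}/riak/#{BUCKET}?keys=true"))
  extract_keys(body).each do |key|
    uri = URI("#{RIAK}/riak/#{BUCKET}/#{key}?deletedvclock=true")
    res = Net::HTTP.get_response(uri)
    vclock = tombstone_vclock(res.code.to_i, res)
    next unless vclock
    req = Net::HTTP::Delete.new(uri)
    req["X-Riak-Vclock"] = vclock
    Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
  end
end
```

As the comment warns, listing all keys is expensive; this only makes sense as an occasional offline job, not something to run against a busy production cluster.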
We spent quite a bit of time today discussing this behavior and have decided to roll better reaping functionality into AAE, as opposed to relying entirely on the get-after-put, which might not be done propagating (depending on dw/pw settings). Additionally, when AAE is used as a view of the data for scans etc., we'll be smarter about sifting out tombstones. Thanks to everyone for the discussion and contributions. Closing this issue.
Tombstones may not be reaped if reaping occurs before tombstones are written to all replicas.
Scenario
- riak_kv/src/riak_kv_delete.erl, line 80 (commit cada9c7)
- riak_kv/src/riak_kv_delete.erl, line 86 (commit cada9c7): `( delete | read_repair | noop )`
- riak_kv/src/riak_kv_get_fsm.erl, line 282 (commit cada9c7)
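To make the race concrete, here is a toy model (a sketch, not the Erlang implementation; all names are illustrative) of the delete-then-get flow these references point at: the delete writes a tombstone, and the follow-up get only reaps if every replica already holds it:

```ruby
TOMBSTONE = :tombstone

# Toy cluster: each replica is a plain key->value hash.
class FakeCluster
  def initialize(n_replicas)
    @replicas = Array.new(n_replicas) { {} }
  end

  # Write the tombstone only to the replicas that are reachable,
  # mimicking a delete that has not yet propagated everywhere.
  def put(key, value, reachable: @replicas.size)
    @replicas.first(reachable).each { |r| r[key] = value }
  end

  # The get-after-delete reap check: reap (drop the tombstone) only
  # when *every* replica holds it; otherwise fall back to read repair.
  def get_and_maybe_reap(key)
    if @replicas.all? { |r| r[key] == TOMBSTONE }
      @replicas.each { |r| r.delete(key) }
      :delete
    else
      :read_repair
    end
  end
end

cluster = FakeCluster.new(3)
cluster.put("k", TOMBSTONE, reachable: 2) # tombstone reaches only 2 of 3
cluster.get_and_maybe_reap("k")           # => :read_repair, nothing reaped
```

If the get-after-put races ahead of propagation, the reap check fails once and, as reported here, the tombstones are never revisited.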
If one of the replicas returns an older version during tombstone reaping, the riak_kv_get_core check returns `read_repair` rather than `delete`. The riak_kv_get_fsm should be able to delete the object rather than read-repairing it when one of the replicas returns an object older than the tombstone.
Below is an example set of replicas that will result in a read repair rather than a delete:
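The original example block was lost from this copy of the issue. The following is a hypothetical reconstruction of the decision (not the riak_kv_get_core code), with vector clocks simplified to `{node => counter}` maps:

```ruby
# clock_a descends clock_b if a has seen every event b has, i.e. every
# counter in b is <= the matching counter in a.
def descends?(a, b)
  b.all? { |node, count| a.fetch(node, 0) >= count }
end

# Sketch of the reap decision: reap only when every replica returned
# the identical tombstone; if some replica is merely behind the
# tombstone, the current logic falls back to read repair instead.
def reap_decision(replies, tombstone_clock)
  if replies.all? { |r| r[:tombstone] && r[:vclock] == tombstone_clock }
    :delete       # every replica holds the same tombstone: safe to reap
  elsif replies.any? { |r| descends?(tombstone_clock, r[:vclock]) }
    :read_repair  # some replica is dominated by the tombstone: repair first
  else
    :noop
  end
end

tombstone = { "node_a" => 2, "node_b" => 1 }
replies = [
  { tombstone: true,  vclock: tombstone },
  { tombstone: true,  vclock: tombstone },
  { tombstone: false, vclock: { "node_a" => 1 } }, # never saw the delete
]
reap_decision(replies, tombstone) # => :read_repair
```

In the last case the tombstone's clock strictly descends the stale replica's clock, so (per the report above) the FSM could safely treat it as a `delete` instead of issuing a read repair that resurrects work for a dead key.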