
Backport object size/sibling limits to 1.4 #836

Merged
merged 4 commits into 1.4 from feature/large-object-warning on Feb 13, 2014

Conversation

engelsanchez (Contributor)

  • Objects larger than warn_object_size will log a warning on read or
    write. The default is 5 MiB.
  • Objects larger than max_object_size will fail to be written, logging
    an error message. By default there is no limit.
  • Objects with a sibling count larger than warn_siblings will log a
    warning on write. The default is 25.
  • Objects with a sibling count larger than max_siblings will fail to be
    written, logging an error message. By default there is no limit.

Note: in 2.0, there will be default values for max size and siblings, but I didn't want to risk failing operations for users in a point release.
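For illustration, these four settings would live in the riak_kv section of app.config. A minimal sketch, assuming the names above map directly to riak_kv application environment keys (sizes in bytes; the max_* values below are examples only, since both default to no limit):

```erlang
%% app.config excerpt (sketch); values are illustrative
{riak_kv, [
    %% warn on reads/writes of objects larger than 5 MiB (the default)
    {warn_object_size, 5242880},
    %% fail writes of objects larger than 50 MiB (unset = no limit)
    {max_object_size, 52428800},
    %% warn when a written object has more than 25 siblings (the default)
    {warn_siblings, 25},
    %% fail writes once an object exceeds 100 siblings (unset = no limit)
    {max_siblings, 100}
]}
```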

Inline review comment on this diff context:

```erlang
            encode_and_put_no_sib_check(Obj, Mod, Bucket, Key, IndexSpecs, ModState)
    end.

encode_and_put_no_sib_check(Obj, Mod, Bucket, Key, IndexSpecs, ModState) ->
```

Make sure handoff goes through this path - enforcing sibling/object size limits should only be done on paths controlled by the application, where it can refetch the object or otherwise react. On the handoff path, enforcement would lead to data loss (the downside being that, when not enforced, handoff can still create an undesirably large object).
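A rough sketch of the shape of that split; aside from encode_and_put_no_sib_check, every name here (the PutType argument, check_limits/1, maybe_log_limit_warnings/3) is a hypothetical stand-in rather than the actual vnode code:

```erlang
%% Sketch: enforce the limits only on application-driven puts. On the
%% handoff path the object must not be rejected (that would drop data we
%% are merely relocating), so at most a warning is logged before storing.
encode_and_put(Obj, Mod, Bucket, Key, IndexSpecs, ModState, PutType) ->
    case PutType of
        handoff ->
            maybe_log_limit_warnings(Obj, Bucket, Key),
            encode_and_put_no_sib_check(Obj, Mod, Bucket, Key,
                                        IndexSpecs, ModState);
        _ ->
            case check_limits(Obj) of
                ok ->
                    encode_and_put_no_sib_check(Obj, Mod, Bucket, Key,
                                                IndexSpecs, ModState);
                {warn, _What} ->
                    %% over a warn threshold: log and store anyway
                    maybe_log_limit_warnings(Obj, Bucket, Key),
                    encode_and_put_no_sib_check(Obj, Mod, Bucket, Key,
                                                IndexSpecs, ModState);
                {error, Reason} ->
                    %% over a hard limit: refuse the write so the client
                    %% can refetch, resolve siblings, or shrink the object
                    {{error, Reason}, ModState}
            end
    end.
```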

Made the sibling error/warning messages a bit different. They used to be the
same message at different severities; now both write-failure messages share
the same prefix.
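Illustratively (the actual log text and severities in riak_kv may differ), the idea is roughly:

```erlang
%% Illustrative only: the sibling warning and the two write-failure messages
%% are now distinct, with both failure messages sharing a common prefix.
warn_siblings(Bucket, Key, SibCount) ->
    lager:warning("Too many siblings for object ~p/~p: ~p",
                  [Bucket, Key, SibCount]).

fail_siblings(Bucket, Key, SibCount) ->
    lager:error("Put failure: too many siblings for object ~p/~p: ~p",
                [Bucket, Key, SibCount]).

fail_object_size(Bucket, Key, Size) ->
    lager:error("Put failure: object too large ~p/~p: ~p bytes",
                [Bucket, Key, Size]).
```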
This is the same trick that overload uses to return an error code to the put
FSM. Because the FSM only expects this from overload, a non-overload error
previously ended up returning the request id as the error reason, which sucks.
With this change, a local put will return a too-large or too-many-siblings
error to the client.

Local puts will likely fail in this scenario. It is still possible for the
local put to succeed and remote puts to fail (due to different thresholds, or
divergent values with different numbers of siblings); in that case the error
reason might instead report a failure to satisfy pw or a related option. We
currently don't have a system for prioritizing different vnode error
responses, except for overload, which eagerly returns the overload error if
any vnode returned an overload code.
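As a rough illustration of that reply-handling pattern (the tuple shapes, module, and helpers below are assumptions for the sketch, not the exact riak_kv_put_fsm terms):

```erlang
-module(put_reply_sketch).
-export([handle_vnode_reply/2]).

%% Sketch: with this change a vnode failure can carry a real reason
%% (e.g. too_large or too_many_siblings) instead of only the request id,
%% mirroring the trick already used for the overload response.
handle_vnode_reply({fail, Idx, ReqId}, Failures) when is_integer(ReqId) ->
    %% old shape: only the request id comes back, so the client ends up
    %% seeing an opaque error reason
    record_failure(Idx, undefined, Failures);
handle_vnode_reply({fail, Idx, Reason}, Failures) ->
    %% new shape: surface the reason (too_large / too_many_siblings)
    %% directly to the client
    record_failure(Idx, Reason, Failures).

%% trivial stand-in for FSM state handling: accumulate {Idx, Reason}
record_failure(Idx, Reason, Failures) ->
    [{Idx, Reason} | Failures].
```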
The code was restructured a bit to avoid losing data during handoff when the
object has too many siblings or is too large. Direct puts will fail if the
object has too many siblings or is too large, but if an object is moved
somewhere by handoff, it will only generate a warning.
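A minimal sketch of the check itself, in the spirit of the hypothetical check_limits/1 used in the earlier sketch; erlang:external_size/1 is only a stand-in for however the real code measures object size, and the lookups assume the four settings are plain riak_kv application environment keys:

```erlang
%% Sketch: classify an object against the four thresholds. The caller
%% decides what to do with the result; on the handoff path only warnings
%% are acted on, while direct puts turn {error, _} into a failed write.
check_limits(Obj) ->
    Size     = erlang:external_size(Obj),        %% stand-in size estimate
    Siblings = riak_object:value_count(Obj),     %% number of siblings
    MaxSize  = application:get_env(riak_kv, max_object_size, undefined),
    MaxSibs  = application:get_env(riak_kv, max_siblings, undefined),
    WarnSize = application:get_env(riak_kv, warn_object_size, 5242880),
    WarnSibs = application:get_env(riak_kv, warn_siblings, 25),
    if
        is_integer(MaxSize), Size > MaxSize     -> {error, too_large};
        is_integer(MaxSibs), Siblings > MaxSibs -> {error, too_many_siblings};
        Size > WarnSize                         -> {warn, object_size};
        Siblings > WarnSibs                     -> {warn, siblings};
        true                                    -> ok
    end.
```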
Jon made me do it.
@jonmeredith (Contributor)

+1 merge.

riak_test (r_t) suites pass and verify_handoff passes; checked with code coverage to confirm the new code was hit.

Please don't forget to apply to 2.0.

@engelsanchez (Contributor, Author)

Applying to 2.0 is already in the Kill Bill. No way it can be forgotten, right? right?

engelsanchez added a commit that referenced this pull request on Feb 13, 2014: Backport object size/sibling limits to 1.4
@engelsanchez merged commit b700d5b into 1.4 on Feb 13, 2014
@engelsanchez deleted the feature/large-object-warning branch on February 13, 2014 at 21:33