Bz47 config url #2
Closed
Conversation
Configuration URL is always "/"
Resource list is created from the Webmachine dispatch list
Link header is always added
HTML output is an unordered list of anchor tags
JSON output is an object with the resource module name as the key, and [the first element of] its dispatch URL as the value
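The description above maps onto a single Webmachine resource. Here is a minimal, hypothetical sketch of such a resource; the module name, the `{PathSpec, Module, Args}' entry shape, and reading the dispatch list from the webmachine application environment are assumptions for illustration, not code from this patch:

```erlang
%% Hypothetical resource listing the Webmachine dispatch table.
%% Assumes dispatch entries look like {PathSpec, Module, Args} and that
%% the dispatch list is stored in the webmachine application environment.
-module(example_config_resource).
-export([init/1, content_types_provided/2, to_html/2, to_json/2]).

-include_lib("webmachine/include/webmachine.hrl").

init([]) -> {ok, undefined}.

content_types_provided(ReqData, Ctx) ->
    {[{"text/html", to_html}, {"application/json", to_json}], ReqData, Ctx}.

%% HTML output: an unordered list of anchor tags, one per resource.
to_html(ReqData, Ctx) ->
    Items = [io_lib:format("<li><a href=\"~s\">~s</a></li>",
                           [url_of(Path), Mod])
             || {Path, Mod, _Args} <- dispatch_list()],
    {["<ul>", Items, "</ul>"], ReqData, Ctx}.

%% JSON output: an object keyed by resource module name, with the first
%% element of the dispatch URL as the value.
to_json(ReqData, Ctx) ->
    Pairs = [lists:flatten(io_lib:format("\"~s\":\"~s\"",
                                         [Mod, url_of(Path)]))
             || {Path, Mod, _Args} <- dispatch_list()],
    {["{", string:join(Pairs, ","), "}"], ReqData, Ctx}.

dispatch_list() ->
    case application:get_env(webmachine, dispatch_list) of
        {ok, DL} -> DL;
        undefined -> []
    end.

%% Keep only the first static path segment; anything else maps to "/".
url_of([First | _]) when is_list(First) -> "/" ++ First;
url_of(_) -> "/".
```

Mounting it at `{[], example_config_resource, []}' in the dispatch list would give the always-"/" configuration URL described above.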
rebased and merged as |
slfritchie added a commit that referenced this pull request on May 11, 2012
In an ideal world, this module would live in a repo that would be easily sharable across multiple Basho projects. The tricky bit for this would be trying to make generic the `application:get_env(riak_core, dtrace_support)' call that's currently in the `riak_kv_dtrace:dtrace/1' function. But we'll wait for another day, I think.

The purpose of this module is to reduce the overhead of DTrace (and SystemTap) probes when those probes are: 1. not supported by the VM, or 2. disabled by application configuration. #1 is the bigger problem: a single call to the code loader can take several milliseconds. #2 is useful in the case that we want to try to reduce the overhead of adding these probes even further by avoiding the NIF call entirely.

SLF's MacBook Pro tests, without cover, with R14B04 + DTrace:

    timeit_naive              average 2236.306 usec/call over    500.0 calls
    timeit_mochiglobal        average    0.509 usec/call over 225000.0 calls
    timeit_best OFF (fastest) average    0.051 usec/call over 225000.0 calls
    timeit_best ON -init      average    1.027 usec/call over 225000.0 calls
    timeit_best ON +init      average    0.202 usec/call over 225000.0 calls

With cover, with R14B04 + DTrace:

    timeit_naive              average 2286.202 usec/call over    500.0 calls
    timeit_mochiglobal        average    1.255 usec/call over 225000.0 calls
    timeit_best OFF (fastest) average    1.162 usec/call over 225000.0 calls
    timeit_best ON -init      average    2.207 usec/call over 225000.0 calls
    timeit_best ON +init      average    1.303 usec/call over 225000.0 calls
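The fast path described above comes down to one cheap flag read before any NIF call. A minimal sketch of that idea, assuming mochiglobal's `get'/`put' API and OTP's `dyntrace' module; the module name, cache key, and probe argument are illustrative, not riak_kv_dtrace's actual interface:

```erlang
%% Illustrative sketch: cache "are probes usable?" once, check it cheaply.
-module(example_dtrace).
-export([init/0, dtrace/1]).

-define(KEY, example_dtrace_enabled).  %% hypothetical mochiglobal key

%% Called once at startup. Determining probe support is the expensive
%% part (code-loader calls can take milliseconds), so do it exactly once
%% and compile the answer into a mochiglobal constant.
init() ->
    Enabled = case application:get_env(riak_core, dtrace_support) of
                  {ok, true} -> (catch dyntrace:available()) =:= true;
                  _          -> false
              end,
    mochiglobal:put(?KEY, Enabled).

%% Hot path: a near-free mochiglobal read decides whether to pay for
%% the NIF call at all, covering both the "unsupported VM" and the
%% "disabled by configuration" cases.
dtrace(Probe) when is_integer(Probe) ->
    case mochiglobal:get(?KEY, false) of
        true  -> dyntrace:p(Probe);
        false -> ok
    end.
```

mochiglobal works by compiling the value into a generated module, which is why the OFF path can approach the sub-microsecond timings in the measurements above.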
jrwest added a commit that referenced this pull request on Jan 28, 2013
* Instead of the vnode manager triggering a separate transfer for each source index/target pair, it triggers one "copy" transfer per source index. The copy transfer contains the list of target indexes to "copy" to. The vnode then triggers an outbound ownership_copy one at a time until all transfers for the list of indexes are complete. Once complete, it notifies the vnode manager like regular handoff.

* Added (barely tested) support for forwarding.

* This approach more closely resembles typical ownership transfer/hinted handoff for a vnode. The primary differences are: 1) data is not deleted after handoff completes (this needs to be addressed -- at some point some data needs to be deleted, see comments); 2) in the case that an index exists in both the old & new rings, it may copy its data to target indexes and then keep running. In this case data also needs to be deleted (also punted on), but some data must still remain (referred to as rehash in the Core 2.0 doc); 3) the vnodes affected by #2 also differ in that after they begin forwarding they may stop and continue running in their regular state. In addition, when forwarding, these indexes will forward some requests while others will still be handled by the local vnode (not forwarded). What to do with a request during explicit forwarding (when the vnode returns {forward, X} from handle_handoff_command) in the case where forwarding that message would deliver it back to the same vnode still needs to be addressed (see comments).

* This commit adds a vnode callback, request_hash, required only if changing ring sizes is supported. We probably need something better than this, but it is sufficient for a prototype. The function's argument is the request to be handled by the vnode, and the return value is the hashed value of the key from the request. This is necessary because the request is opaque to riak_core_vnode. One obvious issue, for example, is that in the case of FOLD_REQ there is no key to hash -- even though we probably shouldn't forward it, and in some cases don't, anyway. A sketch of such a callback appears below.
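As a rough sketch of that callback, inside a hypothetical vnode module (the request record is invented for illustration, and the use of `chash:key_of/1' assumes the vnode hashes its own key representation the same way its ring does):

```erlang
%% Hypothetical request record for illustration only.
-record(example_req, {bucket, key, value}).

%% request_hash/1: required only when ring resizing is supported.
%% riak_core_vnode cannot see inside an opaque request, so the vnode
%% itself reports where the request's key hashes on the ring.
request_hash(#example_req{bucket = B, key = K}) ->
    chash:key_of({B, K});
%% Some requests (e.g. a FOLD_REQ) carry no single key; returning
%% `undefined' signals that there is nothing sensible to hash.
request_hash(_Other) ->
    undefined.
```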
jtuple added a commit that referenced this pull request on Apr 21, 2013
Active anti-entropy is a process through which multiple replicas periodically sync with each other and automatically repair any keys that are missing or have divergent data. For example, a user could delete all data from a node and Riak would automatically regenerate the data from the other replicas. This implementation uses hash trees (Merkle trees) to perform lightweight key exchange, with work proportional to the number of divergent keys rather than the size of the overall keyspace. It meets several design goals:

1. The underlying hash trees are on-disk structures and can scale to billions of keys and beyond. This is in contrast to in-memory trees that require significant RAM to support massive keyspaces.

2. The underlying hash trees are persistent. Riak nodes can be restarted without fear of hash tree data being lost and needing to be rebuilt.

3. As new data is written to Riak, the hash trees associated with the various partitions are kept up to date. Each write in Riak triggers an asynchronous write to one or more hash trees. Combined with #2, this enables trees to be built once through a scan over existing data and then maintained in real time. In reality, trees are expired over time and rebuilt to ensure the hash trees and backend data stay in sync, and also to identify bit rot / disk failure.

4. The entire implementation is designed to be non-blocking. For example, a snapshot of a hash tree is generated before performing an exchange with other replicas, allowing concurrent inserts of new key/hash pairs as new writes occur.

The current implementation triggers read repair for each key difference identified through the hash exchange. This is a reasonable approach, as read repair is a stable, production-tested mechanism in Riak. However, the read repair approach leads to slower replica repair in cases where there are a large number of key differences. This is an area for future improvement.
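To make the "work proportional to divergence" claim concrete, here is a toy, in-memory illustration of the exchange step; it is not riak_core's hashtree (which is on-disk and multi-level), and all names in it are invented. Both sides summarize their keyspace into a fixed number of bucket hashes, compare those, and only exchange keys from buckets whose hashes differ:

```erlang
%% Toy one-level illustration of hash-based exchange (not the real,
%% on-disk, multi-level hash trees used by riak_core).
-module(example_exchange).
-export([divergent_keys/2]).

-define(BUCKETS, 1024).  %% illustrative fan-out

%% Store is a map of Key => ObjectHash. Group entries into buckets.
summarize(Store) ->
    maps:fold(
      fun(Key, ObjHash, Acc) ->
              B = erlang:phash2(Key, ?BUCKETS),
              maps:update_with(B, fun(L) -> [{Key, ObjHash} | L] end,
                               [{Key, ObjHash}], Acc)
      end, #{}, Store).

%% One hash summarizes a whole bucket; sorting makes it order-independent.
bucket_hash(KVs) ->
    crypto:hash(sha, term_to_binary(lists:sort(KVs))).

%% Compare bucket hashes first; descend only into buckets that differ,
%% returning the keys that would be handed to read repair.
divergent_keys(StoreA, StoreB) ->
    SumA = summarize(StoreA),
    SumB = summarize(StoreB),
    Buckets = lists:usort(maps:keys(SumA) ++ maps:keys(SumB)),
    lists:append([differing(maps:get(B, SumA, []), maps:get(B, SumB, []))
                  || B <- Buckets]).

differing(KVsA, KVsB) ->
    case bucket_hash(KVsA) =:= bucket_hash(KVsB) of
        true ->
            [];  %% bucket in sync: skipped entirely
        false ->
            MA = maps:from_list(KVsA),
            MB = maps:from_list(KVsB),
            Keys = lists:usort(maps:keys(MA) ++ maps:keys(MB)),
            [K || K <- Keys,
                  maps:get(K, MA, missing) =/= maps:get(K, MB, missing)]
    end.
```

When the two stores agree, every bucket hash matches and no keys are exchanged; the cost of a sync scales with the number of differing buckets rather than the total number of keys.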
russelldb pushed a commit to russelldb/riak_core that referenced this pull request on Nov 20, 2018
Core support for vnode soft-limits
martincox pushed a commit that referenced this pull request on Mar 6, 2020
develop-2.2.8 to develop-3.1
This pull request was closed.
Proposed patch for https://issues.basho.com/show_bug.cgi?id=47