Add implementation notes about searching

Add notes about searching, especially in regard to coverage and
1 parent 3f2cdaf commit 950f2d68dd05ae32f4c0ce6206a037a465d202b5 @rzezeski committed Jul 12, 2012
Notes on the implementation _before_ it is implemented. Think of it
as something like [readme driven development] [rdd].

Solr already provides distributed search. However, it is up to the
client, in this case Yokozuna, to decide which _shards_ to run the
query against. The caller specifies the shards and Solr handles the
coordination of the query and the merging of the results.

The shards should be mutually exclusive if you want the results to be
correct. If the same doc id appears in the rows returned then Solr
will remove all but one instance. Which instances Solr removes is
non-deterministic. In the case where the duplicates aren't in the
rows returned, and the total rows matching is greater than those
returned, then `numFound` may be incorrect.

This poses a problem for Yokozuna since it replicates documents. A
document spans multiple shards, thus neighboring shards will have
overlapping document sets. Depending on the number of partitions
(also referred to as _ring size_) and the number of nodes, it may be
possible to pick a set of shards which contains the entire set of
documents with no overlap. In most cases, however, overlap cannot be
avoided.

The presence of overlap means that Yokozuna can't simply query a set
of shards. The overlap could cause `numFound` to be wildly off.
Yokozuna could use a Solr Core per index/partition combination, but
this could cause an explosion in the number of Core instances. Also,
more Core instances means more file descriptor usage and less chance
for Solr to optimize Core usage. A better approach is to filter the
data at query time.

Riak Core contains code to plan and execute _coverage_ queries. The
idea is to calculate a set of partitions which, when combined, covers
the entire set of data. The list of unique nodes, or shards, and the
list of partitions can be obtained from the coverage plan. The
question is how to filter the data in Solr using the partitions
generated by the coverage plan.
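As an illustrative sketch only (not Riak Core's actual planner), a
covering set can be computed by taking every `N`th partition, under the
usual Riak placement assumption that a document first owned by partition
`P` is replicated to `P` and its `N-1` successors on the ring:

```python
def covering_partitions(q, n):
    """Pick a set of partitions that together see every document.

    Assumes the usual Riak placement: a document first owned by
    partition P is replicated to P and its N-1 successors, so a chosen
    partition C holds documents first-owned by C, C-1, ..., C-(N-1)
    (mod Q). Choosing every Nth partition therefore covers the ring.
    """
    return list(range(0, q, n))

def covers_all(q, n, chosen):
    """Check that every owning partition is seen by some chosen one."""
    return all(any((c - o) % q < n for c in chosen) for o in range(q))
```

For the default `Q=64` and `N=3` this yields 22 partitions, which see
`22 * 3 = 66` preference-list slots for 64 partitions' worth of data,
foreshadowing the overlap discussed below.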

At write time Yokozuna sends the document to `N` different partitions.
Each partition does a write to its local Solr Core instance. A Solr
_document_ is a set of field-value pairs. Yokozuna can leverage this
fact by adding a partition number field (`_pn`) during the local
write. A document will be replicated `N` times, but each replica will
contain a different `_pn` value based on its owning partition. That
takes care of the first half of the problem, getting the partition
data into Solr. Next it must be filtered on.
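A minimal sketch of this write-time tagging (the `_pn` field name comes
from the text; modeling a document as a dict and the replica bookkeeping
as a simple list are illustrative assumptions):

```python
def tag_replicas(doc, owning_partitions):
    """Produce one copy of the document per replica, each carrying the
    partition performing the local Solr write in its `_pn` field."""
    return [dict(doc, _pn=p) for p in owning_partitions]
```

Each replica is identical except for `_pn`, which is what makes
partition-based filtering possible at query time.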

The most obvious way to filter on `_pn` is to append it to the user
query. For example, if the user query is `text:banana` then Yokozuna
would transform it to something like `text:banana AND (_pn:<pn1> OR
_pn:<pn2> ... OR _pn:<pnI>)`. The new query will only accept
documents that have been stored by the specified partitions. This
works, but a more efficient, and perhaps more elegant, method is to
use Solr's _filter query_ mechanism.
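The query-appending transformation described above can be sketched as a
small string rewrite (integer partition numbers are an illustrative
simplification):

```python
def append_pn_filter(user_query, partitions):
    """Rewrite the user query so that only documents written by the
    given partitions can match, by ANDing on a disjunction of `_pn`
    clauses."""
    clause = " OR ".join("_pn:%d" % p for p in partitions)
    return "%s AND (%s)" % (user_query, clause)
```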

Solr's filter query is like a regular query, but it does not affect
scoring and its results are cached. Since a partition can contain
many documents, caching may sound scary. However, the cached value is
a `BitDocSet`, which uses a single bit for each document. That means
a megabyte of memory can cache over 8 million documents. The
resulting query generated by Yokozuna then looks like the following.

    q=text:banana&fq=_pn:P2 OR _pn:P5 ... OR _pn:P65
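A hedged sketch of building those parameters, using the standard
library's URL encoding (integer partition numbers, and the function
name, are assumptions for illustration):

```python
from urllib.parse import urlencode

def coverage_params(user_query, partitions):
    """Build q/fq parameters so the partition filter runs as a Solr
    filter query: cached independently of q and excluded from
    scoring."""
    fq = " OR ".join("_pn:%d" % p for p in partitions)
    return urlencode({"q": user_query, "fq": fq})
```

Keeping the partition filter in `fq` rather than `q` means the same
coverage plan re-uses one cached `BitDocSet` across many user queries.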

It may seem like this is the final solution, but there is still one
last problem. Earlier I said that the covering set of partitions
accounts for all the data. This is true, but in most cases it
accounts for a little bit more than all the data. Depending on the
number of partitions (`Q`) and the number of replicas (`N`), there may
be no possible way to select a set of partitions that covers _exactly_
the total set of data. To be precise, if `N` does not evenly divide
into `Q` then the number of overlapping partitions is `L = N - (Q rem
N)`. For the defaults of `Q=64` and `N=3` this means `L = 3 - (64 rem
3)`, or `L=2`.
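The formula above is a one-liner; the worked example confirms the
`Q=64`, `N=3` case:

```python
def overlap(q, n):
    """Number of overlapping partitions in a covering set:
    L = N - (Q rem N) when N does not evenly divide Q, else 0."""
    return 0 if q % n == 0 else n - (q % n)
```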

To guarantee that only the total set of unique documents is returned,
the overlapping partitions must be filtered out. To do this Yokozuna
takes the original set of partitions and performs a series of
transformations, ending with the same list of partitions but with
filtering data attached to each. Each partition will have either the
value `any` or a list of partitions paired with it. The value
indicates which of its replicas to include, based on the first
partition that owns the document. The value `any` means to include a
replica no matter which partition is the first to own it. Otherwise
the replica's first owner must be one of the partitions in the include
list.

In order to perform this additional filtering, the first partition
number must be stored as a field in the document. This is the purpose
of the `_fpn` field. Using the final list of partitions, with the
filtering data now added, each `{P, any}` pair can be added as a
simple `_pn:P` to the filter query. However, a `{P, IFPs}` pair must
constrain on the `_fpn` field as well. The `P` and IFPs must be
applied together. If you don't constrain the IFPs to apply only to
`P` then they will apply to the entire query and only a subset of the
total data will be returned. Thus a `{P, [IFP1, IFP2]}` pair will be
converted to `(_pn:P AND (_fpn:IFP1 OR _fpn:IFP2))`. The final query,
achieving 100% accuracy, will look something like the following.

    q=text:banana&fq=_pn:P2 OR _pn:P5 ... OR (_pn:P60 AND (_fpn:P60)) OR _pn:P63

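Rendering the filter pairs into that final `fq` string can be sketched
as follows; modeling `{P, any}` and `{P, IFPs}` as Python tuples, and
using integer partition numbers, are assumptions for illustration:

```python
def pair_to_clause(partition, include):
    """Render one filter pair as a query clause: a bare `_pn:P` for
    {P, any}, or `_pn:P` ANDed with an `_fpn` disjunction for
    {P, IFPs}."""
    if include == "any":
        return "_pn:%d" % partition
    fpns = " OR ".join("_fpn:%d" % i for i in include)
    return "(_pn:%d AND (%s))" % (partition, fpns)

def build_fq(pairs):
    """OR together the clauses for the whole covering set."""
    return " OR ".join(pair_to_clause(p, inc) for p, inc in pairs)
```

Note that the `_fpn` constraint is grouped inside the parentheses with
its own `_pn:P`, so it restricts only that partition's replicas rather
than the whole filter query.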
Index Mapping & Cores
