Introduce search context - point in time view of indices #56480

Closed
wants to merge 98 commits into from
Changes from 89 commits
Commits
98 commits
3ce9a3b
Cut over from SearchContext to ReaderContext (#51282)
dnhatn Jan 27, 2020
f797bec
Merge branch 'master' into feature/reader-context
dnhatn Jan 29, 2020
d2c651e
Merge branch 'master' into feature/reader-context
dnhatn Feb 5, 2020
e6a5bec
Revert "Cut over from SearchContext to ReaderContext (#51282)"
dnhatn Feb 16, 2020
a45ca7b
Merge branch 'master' into feature/reader-context
dnhatn Feb 16, 2020
ea528f1
Merge branch 'master' into feature/reader-context
dnhatn Feb 19, 2020
8b710d1
Merge branch 'master' into feature/reader-context
dnhatn Feb 21, 2020
fb7386a
Cut over from SearchContext to ReaderContext (#51282)
dnhatn Jan 27, 2020
222d5f3
Merge branch 'master' into feature/reader-context
dnhatn Feb 22, 2020
aade8b6
Merge branch 'master' into feature/reader-context
dnhatn Feb 26, 2020
cdbfb67
Merge branch 'master' into feature/reader-context
dnhatn Feb 28, 2020
f921838
Merge branch 'master' into feature/reader-context
dnhatn Mar 2, 2020
206381e
Move states of search to coordinating node (#52741)
dnhatn Mar 3, 2020
6805c5e
Merge branch 'master' into feature/reader-context
dnhatn Mar 13, 2020
395e2a9
Adjust SearchService after merging from master
dnhatn Mar 13, 2020
b4ffd2e
Merge branch 'master' into feature/reader-context
dnhatn Mar 23, 2020
2bcee88
Merge branch 'master' into feature/reader-context
dnhatn Mar 23, 2020
8913369
Allow searches with specific reader contexts (#53989)
dnhatn Mar 26, 2020
70ce5e5
Merge branch 'master' into feature/reader-context
jimczi Mar 26, 2020
5430ab2
fix checkstyle after backport
jimczi Mar 26, 2020
4713339
Merge branch 'master' into feature/reader-context
dnhatn Apr 8, 2020
3edbda9
Merge branch 'master' into feature/reader-context
dnhatn Apr 12, 2020
3b0760b
Adjust hlrc tests after merge
dnhatn Apr 12, 2020
ff4689b
enable bwc
dnhatn Apr 13, 2020
defa2e7
Restore missing rewrite when create search context
dnhatn Apr 14, 2020
1b538c6
Merge branch 'master' into feature/reader-context
dnhatn Apr 14, 2020
437094b
Merge branch 'master' into feature/reader-context
dnhatn Apr 14, 2020
07f5000
Merge branch 'master' into feature/reader-context
dnhatn Apr 15, 2020
c8b0ccb
Merge branch 'master' into feature/reader-context
dnhatn Apr 15, 2020
be30138
Merge branch 'master' into feature/reader-context
dnhatn Apr 17, 2020
ac2e9ac
Merge branch 'master' into feature/reader-context
dnhatn Apr 20, 2020
aa14abe
Add open reader contexts API (#55265)
dnhatn Apr 21, 2020
f879ed5
Merge branch 'master' into feature/reader-context
dnhatn Apr 21, 2020
15f18b1
Merge branch 'master' into feature/reader-context
dnhatn Apr 26, 2020
51f9542
Adds the ability to acquire readers in IndexShard (#54966)
jimczi Apr 27, 2020
5b3a41b
Mark the reader context as used (#55854)
jimczi Apr 28, 2020
d3fc71e
Merge branch 'master' into feature/reader-context
dnhatn Apr 29, 2020
728db34
Merge branch 'master' into feature/reader-context
dnhatn Apr 29, 2020
8b9878b
Merge branch 'master' into feature/reader-context
dnhatn May 1, 2020
3be26ed
Enable can match for search with reader contexts (#56032)
dnhatn May 4, 2020
1ff72dc
Merge branch 'master' into feature/reader-context
dnhatn May 5, 2020
fb3cf40
Merge branch 'master' into feature/reader-context
dnhatn May 6, 2020
00c5ddc
Merge branch 'master' into feature/reader-context
dnhatn May 7, 2020
c2231fe
Move SearchWithReaderContextIT to internalClusterTest
dnhatn May 7, 2020
c6dac42
Rename reader context to search context (#56351)
dnhatn May 8, 2020
9ceef08
Merge branch 'master' into feature/reader-context
dnhatn May 8, 2020
501c677
Merge branch 'master' into feature/reader-context
dnhatn May 8, 2020
a85c6cf
Merge branch 'master' into feature/reader-context
dnhatn May 8, 2020
de53080
Add total hits info to test assertion
dnhatn May 8, 2020
1c36fef
more on renaming
dnhatn May 8, 2020
3d0b127
Merge branch 'master' into feature/reader-context
dnhatn May 9, 2020
13ef912
fix security listener
dnhatn May 9, 2020
6bc453f
simplify validation of reader context
dnhatn May 9, 2020
63a3595
more on renaming
dnhatn May 9, 2020
454f7fb
bwc: remove scroll on failed authorization
dnhatn May 9, 2020
5971c97
Merge branch 'master' into feature/reader-context
dnhatn May 10, 2020
a60ea8d
Merge branch 'master' into feature/reader-context
dnhatn May 12, 2020
085c4ea
Remove reader context if index gets index midway
dnhatn May 12, 2020
23e8194
Merge branch 'master' into feature/reader-context
dnhatn May 13, 2020
aa8ed55
Merge branch 'master' into feature/reader-context
dnhatn May 14, 2020
8572a05
Merge branch 'master' into feature/reader-context
dnhatn May 25, 2020
852f45d
Merge branch 'master' into feature/reader-context
dnhatn May 26, 2020
d654b26
remove unused code
jimczi May 27, 2020
ce60174
Merge branch 'master' into feature/reader-context
dnhatn May 29, 2020
2d9a281
Merge branch 'master' into feature/reader-context
dnhatn Jun 4, 2020
0d0d647
Use IndexShard from reader context (#57384)
dnhatn Jun 4, 2020
3f0fc80
Merge branch 'master' into feature/reader-context
dnhatn Jun 16, 2020
3ae4ff3
Merge branch 'master' into feature/reader-context
dnhatn Jun 17, 2020
f68dc64
Update docs/reference/search/search_context.asciidoc
jimczi Jun 26, 2020
699c4d4
Update docs/reference/search/search_context.asciidoc
jimczi Jun 26, 2020
0bc08e1
address review
jimczi Jun 26, 2020
2027856
Merge branch 'master' into reader-context
jimczi Jun 26, 2020
7b765d2
ensure that we remove the reader context on failures
jimczi Jun 26, 2020
5be41e0
Merge branch 'master' into feature/reader-context
dnhatn Jun 28, 2020
e61bd20
remove reader from the active list during put
dnhatn Jun 29, 2020
44aca91
Ensure open before acquire searcher
dnhatn Jun 29, 2020
e365093
check wrapper once in search supplier
dnhatn Jun 29, 2020
969ddf4
fix test
dnhatn Jun 29, 2020
ae7f1a4
add doc for can match
dnhatn Jun 29, 2020
b3fe074
can_match constant
dnhatn Jun 29, 2020
05dd518
stop execution on query rewrite failures
jimczi Jun 29, 2020
1c8dd51
Merge branch 'master' into feature/reader-context
dnhatn Jul 1, 2020
aeeee70
add javadocs for RescoreDocIds
dnhatn Jul 1, 2020
96fe51f
add AliasFilter in the search context id and rename SearchContextId i…
jimczi Jul 2, 2020
29e304a
fix doc test
dnhatn Jul 2, 2020
1d9bb74
Add response sample for close api
dnhatn Jul 3, 2020
5290201
support wildcard
dnhatn Jul 3, 2020
21da46f
Merge branch 'master' into reader-context
jimczi Jul 3, 2020
92131bd
Merge branch 'master' into feature/reader-context
dnhatn Jul 3, 2020
4a38f0d
Fix AliasFilter serialize
dnhatn Jul 4, 2020
c77d3ca
fix docs
dnhatn Jul 4, 2020
e2a7fef
explain refresh in search context
dnhatn Jul 5, 2020
c2b02cd
Merge branch 'master' into feature/reader-context
dnhatn Jul 7, 2020
ada43b3
rewording search context
dnhatn Jul 7, 2020
2f867dc
apply doc suggestion
dnhatn Jul 20, 2020
c155098
Merge branch 'master' into feature/reader-context
dnhatn Jul 20, 2020
840ca7c
bump bwc version
dnhatn Jul 22, 2020
e2e5edc
Merge branch 'master' into feature/reader-context
dnhatn Jul 23, 2020
@@ -52,6 +52,6 @@ protected void doExecute(Task task, SearchRequest request, ActionListener<Search
InternalAggregations.EMPTY,
new Suggest(Collections.emptyList()),
new SearchProfileShardResults(Collections.emptyMap()), false, false, 1),
"", 1, 1, 0, 0, ShardSearchFailure.EMPTY_ARRAY, SearchResponse.Clusters.EMPTY));
"", 1, 1, 0, 0, ShardSearchFailure.EMPTY_ARRAY, SearchResponse.Clusters.EMPTY, null));
}
}
@@ -239,7 +239,7 @@ public void testInfo() throws IOException {
public void testSearchScroll() throws IOException {
SearchResponse mockSearchResponse = new SearchResponse(new SearchResponseSections(SearchHits.empty(), InternalAggregations.EMPTY,
null, false, false, null, 1), randomAlphaOfLengthBetween(5, 10), 5, 5, 0, 100, ShardSearchFailure.EMPTY_ARRAY,
SearchResponse.Clusters.EMPTY);
SearchResponse.Clusters.EMPTY, null);
mockResponse(mockSearchResponse);
SearchResponse searchResponse = restHighLevelClient.scroll(
new SearchScrollRequest(randomAlphaOfLengthBetween(5, 10)), RequestOptions.DEFAULT);
@@ -810,7 +810,9 @@ public void testApiNamingConventions() throws Exception {
"scripts_painless_execute",
"indices.simulate_template",
"indices.resolve_index",
"indices.add_block"
"indices.add_block",
"open_search_context",
"close_search_context",
};
//These API are not required for high-level client feature completeness
String[] notRequiredApi = new String[] {
@@ -45,7 +45,7 @@ protected org.elasticsearch.xpack.core.search.action.AsyncSearchResponse createS
// add search response, minimal object is okay since the full randomization of parsing is tested in SearchResponseTests
SearchResponse searchResponse = randomBoolean() ? null
: new SearchResponse(InternalSearchResponse.empty(), randomAlphaOfLength(10), 1, 1, 0, randomIntBetween(0, 10000),
ShardSearchFailure.EMPTY_ARRAY, Clusters.EMPTY);
ShardSearchFailure.EMPTY_ARRAY, Clusters.EMPTY, null);
org.elasticsearch.xpack.core.search.action.AsyncSearchResponse testResponse =
new org.elasticsearch.xpack.core.search.action.AsyncSearchResponse(id, searchResponse, error, isPartial, isRunning,
startTimeMillis, expirationTimeMillis);
111 changes: 111 additions & 0 deletions docs/reference/search/search_context.asciidoc
@@ -0,0 +1,111 @@
[[search-context]]
==== Search Context

By default, a search request executes against the most recent point-in-time readers of the
target indices. Sometimes it is preferable to execute multiple search requests using
the same point-in-time readers. For example, the combined result of an initial search and
subsequent `search_after` requests is more consistent if all of them use the same
point-in-time readers. This can be done by using a single search context for all search requests.

A search context must be opened in a separate step before it can be used in subsequent
search requests. The `keep_alive` parameter tells Elasticsearch how long it should keep
the search context alive, e.g. `?keep_alive=5m`.

[source,console]
--------------------------------------------------
POST /twitter/_search_context?keep_alive=1m
--------------------------------------------------
// TEST[setup:twitter]

The result of the above request includes an `id`, which should be
passed to the `id` field of the `search_context` parameter of a search request.

[source,console]
--------------------------------------------------
POST /_search <1>
{
"size": 100,
"query": {
"match" : {
"title" : "elasticsearch"
}
},
"search_context": {
"id": "y_auAwMDaWR4BXV1aWQxAgEJY2x1c3Rlcl94Bm5vZGVfMQFhAAAAAAAAAAEDaWR5BXV1aWQyKgEJY2x1c3Rlcl95Bm5vZGVfMgFiAAAAAAAAAAwDaWR5BXV1aWQyKwAGbm9kZV8zAWMAAAAAAAAAKgEFdXVpZDIAAA==", <2>
"keep_alive": "1m" <3>
}
}
--------------------------------------------------
// TEST[catch:missing]

<1> A search request with `search_context` must not specify `index`, `routing`,
or `preference`, as these parameters are copied from the `search_context`.
<2> The `id` parameter tells Elasticsearch to execute the request using
the point-in-time readers from this search context id.
<3> The `keep_alive` parameter tells Elasticsearch how long it should extend
the time to live of the search context.

IMPORTANT: The open search context request and each subsequent search request can
return a different `id`; thus always use the most recently received `id` for the
next search request.
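Since every response can carry a fresh `id`, a client should thread the most recently received `id` into each following request. A minimal Python sketch of that bookkeeping (the helper names, the plain-dict request body, and the `search_context_id` response field are illustrative assumptions for this sketch, not part of the documented API):

```python
# Illustrative client-side bookkeeping for the search context id.
# Helper names and the "search_context_id" response field are
# assumptions for this sketch, not part of the documented API.

def next_search_body(context_id, query, keep_alive="1m", size=100):
    """Build the body of the next search request from the latest id."""
    return {
        "size": size,
        "query": query,
        "search_context": {
            "id": context_id,       # always the most recently received id
            "keep_alive": keep_alive,
        },
    }

def latest_context_id(current_id, response):
    """Prefer the refreshed id from a response when one is present."""
    return response.get("search_context_id", current_id)
```

In a loop, each response would be passed through `latest_context_id` before the next request body is built, so stale ids are never reused.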

[[search-context-keep-alive]]
===== Keeping the search context alive
The `keep_alive` parameter, which is passed to an open search context request and
to each subsequent search request, extends the time to live of the search context.
The value (e.g. `1m`, see <<time-units>>) does not need to be long enough to
process all data -- it just needs to be long enough for the next request.

Normally, the background merge process optimizes the index by merging together
smaller segments to create new, bigger segments. Once the smaller segments are
no longer needed, they are deleted. However, open search contexts prevent the
old segments from being deleted since they are still in use.

TIP: Keeping older segments alive means that more disk space and file handles
are needed. Ensure that you have configured your nodes to have ample free file
handles. See <<file-descriptors>>.

Additionally, if a segment contains deleted or updated documents then the search
context must keep track of whether each document in the segment was live at the
time of the initial search request. Ensure that your nodes have sufficient heap
space if you have many open search contexts on an index that is subject to ongoing
deletes or updates.

You can check how many search contexts are open with the
<<cluster-nodes-stats,nodes stats API>>:

[source,console]
---------------------------------------
GET /_nodes/stats/indices/search
---------------------------------------

===== Close search context API

Search contexts are automatically closed when their `keep_alive` has
elapsed. However, keeping search contexts open has a cost, as discussed in the
<<search-context-keep-alive,previous section>>. Search contexts should be closed
as soon as they are no longer needed for search requests.

[source,console]
---------------------------------------
DELETE /_search_context
{
"id" : "x9mtAwMDaWR4BXV1aWQxAgAGbm9kZV8xAWEAAAAAAAAAAQNpZHkFdXVpZDIqAAZub2RlXzIBYgAAAAAAAAAMA2lkeQV1dWlkMisABm5vZGVfMwFjAAAAAAAAACoBBXV1aWQyAAA="
}
---------------------------------------
// TEST[catch:missing]

The API returns the following response:

[source,console-result]
--------------------------------------------------
{
"succeeded": true, <1>
"num_freed": 3 <2>
}
--------------------------------------------------
// TESTRESPONSE[s/"succeeded": true/"succeeded": $body.succeeded/]
// TESTRESPONSE[s/"num_freed": 3/"num_freed": $body.num_freed/]

<1> If true, all search contexts associated with the given id were successfully closed
<2> The number of search contexts that were successfully closed
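The two documented response fields can be checked programmatically. A small Python sketch, assuming the response body has exactly the shape shown above (the helper name is an illustrative assumption):

```python
import json

def parse_close_response(raw):
    """Extract the documented fields from a close search context response.

    Assumes the body has exactly the documented shape:
    {"succeeded": bool, "num_freed": int}.
    """
    body = json.loads(raw)
    return body["succeeded"], body["num_freed"]

# Example using the sample response from the docs:
succeeded, num_freed = parse_close_response('{"succeeded": true, "num_freed": 3}')
```

A client might check `succeeded` before assuming every context was freed, since some contexts may have expired already.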
@@ -53,7 +53,7 @@ protected MultiSearchTemplateResponse createTestInstance() {
SearchResponse.Clusters clusters = randomClusters();
SearchTemplateResponse searchTemplateResponse = new SearchTemplateResponse();
SearchResponse searchResponse = new SearchResponse(internalSearchResponse, null, totalShards,
successfulShards, skippedShards, tookInMillis, ShardSearchFailure.EMPTY_ARRAY, clusters);
successfulShards, skippedShards, tookInMillis, ShardSearchFailure.EMPTY_ARRAY, clusters, null);
searchTemplateResponse.setResponse(searchResponse);
items[i] = new MultiSearchTemplateResponse.Item(searchTemplateResponse, null);
}
@@ -82,7 +82,7 @@ private static MultiSearchTemplateResponse createTestInstanceWithFailures() {
SearchResponse.Clusters clusters = randomClusters();
SearchTemplateResponse searchTemplateResponse = new SearchTemplateResponse();
SearchResponse searchResponse = new SearchResponse(internalSearchResponse, null, totalShards,
successfulShards, skippedShards, tookInMillis, ShardSearchFailure.EMPTY_ARRAY, clusters);
successfulShards, skippedShards, tookInMillis, ShardSearchFailure.EMPTY_ARRAY, clusters, null);
searchTemplateResponse.setResponse(searchResponse);
items[i] = new MultiSearchTemplateResponse.Item(searchTemplateResponse, null);
} else {
@@ -69,7 +69,7 @@ private static SearchResponse createSearchResponse() {
InternalSearchResponse internalSearchResponse = InternalSearchResponse.empty();

return new SearchResponse(internalSearchResponse, null, totalShards, successfulShards,
skippedShards, tookInMillis, ShardSearchFailure.EMPTY_ARRAY, SearchResponse.Clusters.EMPTY);
skippedShards, tookInMillis, ShardSearchFailure.EMPTY_ARRAY, SearchResponse.Clusters.EMPTY, null);
}

private static BytesReference createSource() {
@@ -171,7 +171,7 @@ public void testSearchResponseToXContent() throws IOException {
InternalSearchResponse internalSearchResponse = new InternalSearchResponse(
new SearchHits(hits, new TotalHits(100, TotalHits.Relation.EQUAL_TO), 1.5f), null, null, null, false, null, 1);
SearchResponse searchResponse = new SearchResponse(internalSearchResponse, null,
0, 0, 0, 0, ShardSearchFailure.EMPTY_ARRAY, SearchResponse.Clusters.EMPTY);
0, 0, 0, 0, ShardSearchFailure.EMPTY_ARRAY, SearchResponse.Clusters.EMPTY, null);

SearchTemplateResponse response = new SearchTemplateResponse();
response.setResponse(searchResponse);
@@ -158,12 +158,8 @@ public TopDocsAndMaxScore[] topDocs(SearchHit[] hits) throws IOException {
topDocsCollector = TopScoreDocCollector.create(topN, Integer.MAX_VALUE);
maxScoreCollector = new MaxScoreCollector();
}
try {
for (LeafReaderContext ctx : context.searcher().getIndexReader().leaves()) {
intersect(weight, innerHitQueryWeight, MultiCollector.wrap(topDocsCollector, maxScoreCollector), ctx);
}
} finally {
clearReleasables(Lifetime.COLLECTION);
for (LeafReaderContext ctx : context.searcher().getIndexReader().leaves()) {
intersect(weight, innerHitQueryWeight, MultiCollector.wrap(topDocsCollector, maxScoreCollector), ctx);
}
TopDocs topDocs = topDocsCollector.topDocs(from(), size());
float maxScore = Float.NaN;
@@ -508,7 +508,7 @@ protected RequestWrapper<?> buildRequest(Hit doc) {
new TotalHits(0, TotalHits.Relation.EQUAL_TO),0);
InternalSearchResponse internalResponse = new InternalSearchResponse(hits, null, null, null, false, false, 1);
SearchResponse searchResponse = new SearchResponse(internalResponse, scrollId(), 5, 4, 0, randomLong(), null,
SearchResponse.Clusters.EMPTY);
SearchResponse.Clusters.EMPTY, null);

client.lastSearch.get().listener.onResponse(searchResponse);

@@ -164,7 +164,7 @@ private SearchResponse createSearchResponse() {
new TotalHits(0, TotalHits.Relation.EQUAL_TO),0);
InternalSearchResponse internalResponse = new InternalSearchResponse(hits, null, null, null, false, false, 1);
return new SearchResponse(internalResponse, randomSimpleString(random(), 1, 10), 5, 4, 0, randomLong(), null,
SearchResponse.Clusters.EMPTY);
SearchResponse.Clusters.EMPTY, null);
}

private void assertSameHits(List<? extends ScrollableHitSource.Hit> actual, SearchHit[] expected) {
@@ -117,7 +117,7 @@ private static MockTransportService startTransport(
InternalSearchResponse response = new InternalSearchResponse(new SearchHits(new SearchHit[0],
new TotalHits(0, TotalHits.Relation.EQUAL_TO), Float.NaN), InternalAggregations.EMPTY, null, null, false, null, 1);
SearchResponse searchResponse = new SearchResponse(response, null, 1, 1, 0, 100, ShardSearchFailure.EMPTY_ARRAY,
SearchResponse.Clusters.EMPTY);
SearchResponse.Clusters.EMPTY, null);
channel.sendResponse(searchResponse);
});
newService.registerRequestHandler(ClusterStateAction.NAME, ThreadPool.Names.SAME, ClusterStateRequest::new,
@@ -0,0 +1,23 @@
{
"close_search_context":{
"documentation":{
"url":"https://www.elastic.co/guide/en/elasticsearch/reference/master/search-context.html",
"description":"Close a search context"
},
"stability":"beta",
"url":{
"paths":[
{
"path":"/_search_context",
"methods":[
"DELETE"
]
}
]
},
"params":{},
"body":{
"description": "The id of the search context to close"
}
}
}
@@ -0,0 +1,61 @@
{
"open_search_context":{
"documentation":{
"url":"https://www.elastic.co/guide/en/elasticsearch/reference/master/search-context.html",
"description":"Open a search context that can be used in subsequent searches"
},
"stability":"beta",
"url":{
"paths":[
{
"path":"/_search_context",
"methods":[
"POST"
]
},
{
"path":"/{index}/_search_context",
"methods":[
"POST"
],
"parts":{
"index":{
"type":"list",
"description":"A comma-separated list of index names to open search context; use `_all` or empty string to perform the operation on all indices"
}
}
}
]
},
"params":{
"preference":{
"type":"string",
"description":"Specify the node or shard the operation should be performed on (default: random)"
},
"routing":{
"type":"string",
"description":"Specific routing value"
},
"ignore_unavailable":{
"type":"boolean",
"description":"Whether specified concrete indices should be ignored when unavailable (missing or closed)"
},
"expand_wildcards":{
"type":"enum",
"options":[
"open",
"closed",
"hidden",
"none",
"all"
],
"default":"open",
"description":"Whether to expand wildcard expression to concrete indices that are open, closed or both."
},
"keep_alive": {
"type": "string",
"description": "Specify the time to live for the search context"
}
}
}
}
@@ -34,7 +34,7 @@
search.max_keep_alive: "1m"

- do:
catch: /.*Keep alive for scroll.*is too large.*/
catch: /.*Keep alive for.*is too large.*/
search:
rest_total_hits_as_int: true
index: test_scroll
@@ -61,7 +61,7 @@
- length: {hits.hits: 1 }

- do:
catch: /.*Keep alive for scroll.*is too large.*/
catch: /.*Keep alive for.*is too large.*/
scroll:
rest_total_hits_as_int: true
scroll_id: $scroll_id