http://localhost:8094/api/index/DemoIndex/query -d '{
-  "query": {
-    "field": "reviews.content",
-    "term": "good"
-  },
-  "facets": {
-    "types": {
-      "size": 10,
-      "field": "reviews.ratings.Service",
-      "numeric_ranges": [
-        {
-          "name": "Awesome",
-          "min": 5
-        },
-        {
-          "name": "Good",
-          "max": 4
-        },
-        {
-          "name": "Avg",
-          "max": 3
-        },
-        {
-          "name": "Poor",
-          "max": 2
-        },
-        {
-          "name": "Bad",
-          "max": 1
-        }
-      ]
-    }
-  }
-}'
-----
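Note that the numeric ranges in this request each specify only one bound, so they overlap: a rating of 1, for example, falls into `Good`, `Avg`, and `Poor` all at once. A minimal Python sketch of this bucketing, assuming (as a simplification for illustration) that `min` is inclusive, `max` is exclusive, and a value is counted in every range it falls into:

```python
# Facet ranges copied from the request above; each has only one bound.
RANGES = [
    {"name": "Awesome", "min": 5},
    {"name": "Good", "max": 4},
    {"name": "Avg", "max": 3},
    {"name": "Poor", "max": 2},
    {"name": "Bad", "max": 1},
]

def facet_counts(values, ranges=RANGES):
    """Count each value into every range it falls into
    (min inclusive, max exclusive -- an assumption for illustration)."""
    counts = {r["name"]: 0 for r in ranges}
    for v in values:
        for r in ranges:
            lo_ok = v >= r["min"] if "min" in r else True
            hi_ok = v < r["max"] if "max" in r else True
            if lo_ok and hi_ok:
                counts[r["name"]] += 1
    return counts

counts = facet_counts([5, 3, 1, 0, 5])
```

Because the ranges nest, the per-range counts are cumulative, which matches the response below (`Good` > `Avg` > `Poor` > `Bad`).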
-
-=== Curl Response
-
-----
-{
-  "status":{
-    "total":1,
-    "failed":0,
-    "successful":1
-  },
-  "request":{
-    "query":{
-      "term":"good",
-      "field":"reviews.content"
-    },
-    "size":10,
-    "from":0,
-    "highlight":null,
-    "fields":null,
-    "facets":{
-      "types":{
-        "size":10,
-        "field":"reviews.ratings.Service",
-        "numeric_ranges":[
-          {
-            "name":"Awesome",
-            "min":5
-          },
-          {
-            "name":"Good",
-            "max":4
-          },
-          {
-            "name":"Avg",
-            "max":3
-          },
-          {
-            "name":"Poor",
-            "max":2
-          },
-          {"name":"Bad",
-          "max":1
-          }
-        ]
-      }
-    },
-    "explain":false,
-    "sort":["-_score"],
-    "includeLocations":false,
-    "search_after":null,
-    "search_before":null
-  },
-  "hits":[
-    {
-      "index":"DemoIndex_41b91e3a4134783d_4c1c5584",
-      "id":"hotel_15134",
-      "score":1.608775098615459,
-      "sort":["_score"],
-      "fields":{"_$c":"hotel"}
-    },
-    {
-      "index":"DemoIndex_41b91e3a4134783d_4c1c5584",
-      "id":"hotel_3491",
-      "score":1.5929246603757872,
-      "sort":["_score"],
-      "fields":{"_$c":"hotel"}
-    },
-    {
-      "index":"DemoIndex_41b91e3a4134783d_4c1c5584",
-      "id":"hotel_9062",
-      "score":1.3135594084905977,
-      "sort":["_score"],
-      "fields":{"_$c":"hotel"}
-    },
-    {
-      "index":"DemoIndex_41b91e3a4134783d_4c1c5584",
-      "id":"hotel_25261",
-      "score":1.199110122199631,
-      "sort":["_score"],
-      "fields":{"_$c":"hotel"}
-    },
-    {
-      "index":"DemoIndex_41b91e3a4134783d_4c1c5584",
-      "id":"hotel_15976",
-      "score":1.0384598347067433,
-      "sort":["_score"],
-      "fields":{"_$c":"hotel"}
-    },
-    {
-      "index":"DemoIndex_41b91e3a4134783d_4c1c5584",
-      "id":"hotel_26142",
-      "score":1.029912757807367,
-      "sort":["_score"],
-      "fields":{"_$c":"hotel"}
-    },
-    {
-      "index":"DemoIndex_41b91e3a4134783d_4c1c5584",
-      "id":"hotel_3629",
-      "score":0.9683687809619517,
-      "sort":["_score"],
-      "fields":{"_$c":"hotel"}
-    },
-    {
-      "index":"DemoIndex_41b91e3a4134783d_4c1c5584",
-      "id":"hotel_5848",
-      "score":0.9479798384018671,
-      "sort":["_score"],
-      "fields":{"_$c":"hotel"}
-    },
-    {
-      "index":"DemoIndex_41b91e3a4134783d_4c1c5584",
-      "id":"hotel_16443",
-      "score":0.9479797868886458,
-      "sort":["_score"],
-      "fields":{"_$c":"hotel"}
-    },
-    {
-      "index":"DemoIndex_41b91e3a4134783d_4c1c5584",
-      "id":"hotel_2814",
-      "score":0.9288267057398083,
-      "sort":["_score"],
-      "fields":{"_$c":"hotel"}
-    }
-  ],
-  "total_hits":656,
-  "max_score":1.608775098615459,
-  "took":343585473,
-  "facets":{
-    "types": {
-      "field":"reviews.ratings.Service",
-      "total":1871,
-      "missing":3,
-      "other":0,
-      "numeric_ranges":[
-        {
-          "name":"Good",
-          "max":4,"count":658
-        },
-        {
-          "name":"Awesome",
-          "min":5,
-          "count":579
-        },
-        {
-          "name":"Avg",
-          "max":3,
-          "count":366
-        },
-        {
-          "name":"Poor",
-          "max":2,
-          "count":219
-        },
-        {
-          "name":"Bad",
-          "max":1,
-          "count":49
-        }
-      ]
-    }
-  }
-}
-----
\ No newline at end of file
diff --git a/modules/fts/pages/fts-custom-filters.adoc b/modules/fts/pages/fts-custom-filters.adoc
deleted file mode 100644
index 8bba8b666a..0000000000
--- a/modules/fts/pages/fts-custom-filters.adoc
+++ /dev/null
@@ -1,145 +0,0 @@
-= Custom Filters
-
-Custom filters can be viewed and modified from the index’s configuration page under the Index Settings section. Any custom filters that are configured for the current index can be viewed by expanding the Custom Filters panel. If no custom filters have been configured for the index, the Custom Filters panel will be empty.
-
-== Add Custom Filter
-
-To add a custom filter to a Full Text Index via the Couchbase Capella UI, the following permissions are required:
-
-* You must have `Project View` privileges for the project that contains the cluster.
-
-* You must have a database user associated with your organization's user account. The database user must have Read/Write permissions for the bucket on which the index was created.
-
-Initially, the *Custom Filters* panel shows no existing custom filters.
-
-The following four options are provided:
-
-=== Character Filter
-
-Adds a new character filter to the list of those available.
-The new filter becomes available for inclusion in custom-created analyzers.
-
-Left-click *+ Add Character Filter* to display the *Custom Character Filter* dialog:
-
-[#fts_custom_character_filter_dialog_initial]
-image::fts-custom-character-filter-dialog-initial.png[,380,align=left]
-
-The following interactive fields are provided:
-
-* *Name*: A suitable, user-defined name for the new character filter.
-
-* *Type*: The type of filtering to be performed. Available options can be accessed from the pull-down menu to the right of the field.
-(Currently, only `regexp` is available.)
-
-* *Regular Expression*: The specific _regular expression_ that the new character filter is to apply. Character-strings that match the expression will be affected; others will not.
-
-* *Replacement*: The replacement text that will be substituted for each character-string match returned by the regular expression.
-If no replacement text is specified, the matched character-string will be omitted.
-
-The following completed fields define a character filter for deleting leading whitespace:
-
-[#fts_custom_character_filter_dialog_filled]
-image::fts-custom-character-filter-dialog-filled.png[,380,align=left]
-
-When saved, the new character filter is displayed on its own row, with options for further editing and deleting:
-
-[#fts_custom_filters_panel_new_character_filter]
-image::fts-custom-filters-panel-new-character-filter.png[,700,align=left]
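Under the hood, a `regexp` character filter is a match-and-replace pass over the raw input text, before tokenizing. The Python sketch below illustrates the behavior of a filter like the one above; the exact expression (`^[ \t]+` with an empty replacement, to delete leading whitespace) is an assumption for illustration, since the actual values live in the screenshot:

```python
import re

def regexp_char_filter(text, pattern, replacement=""):
    """Apply a regexp character filter: every match of `pattern`
    is replaced by `replacement` (deleted when replacement is empty)."""
    return re.sub(pattern, replacement, text)

# A filter that deletes leading whitespace, as in the example above.
filtered = regexp_char_filter("   \t hello world", r"^[ \t]+")
```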
-
-=== Tokenizer
-
-Adds a new tokenizer to the list of those available.
-
-The new tokenizer becomes available for inclusion in custom-created analyzers.
-
-Left-click *+ Add Tokenizer* to display the *Custom Tokenizer* dialog:
-
-[#fts_custom_filters_tokenizer_dialog_initial]
-image::fts-custom-filters-tokenizer-dialog-initial.png[,380,align=left]
-
-The following interactive fields are provided:
-
-* *Name*: A suitable, user-defined name for the new tokenizer.
-
-* *Type*: The process used in tokenizing. Available options can be accessed from the pull-down menu to the right of the field.
-(Currently, `regexp` and `exception` are available.)
-
-* *Regular Expression*: The specific _regular expression_ used by the tokenizing process.
-
-The following completed fields define a tokenizer that removes uppercase characters:
-
-[#fts_custom_filters_tokenizer_dialog_completed]
-image::fts-custom-filters-tokenizer-dialog-completed.png[,380,align=left]
-
-When saved, the new tokenizer is displayed on its own row, with options for further editing and deleting:
-
-[#fts_custom_filters_panel_new_tokenizer]
-image::fts-custom-filters-panel-new-tokenizer.png[,700,align=left]
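A `regexp` tokenizer emits one token per match of its regular expression. As an illustrative Python sketch (the expression `[a-z]+`, which keeps only runs of lowercase letters and so drops uppercase characters, is an assumption for illustration; the actual expression lives in the screenshot above):

```python
import re

def regexp_tokenize(text, pattern):
    """Emit one token per match of the tokenizer's regular expression."""
    return re.findall(pattern, text)

# Keeping only lowercase runs drops the uppercase characters.
tokens = regexp_tokenize("San Francisco Bay", r"[a-z]+")
```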
-
-=== Token filter
-
-Adds a new token filter to the list of those available. The new token filter becomes available for inclusion in custom-created analyzers.
-
-Left-click *+ Add Token Filter* to display the *Custom Token Filter* dialog:
-
-[#fts_custom_filters_token_filter_dialog_initial]
-image::fts-custom-filters-token-filter-dialog-initial.png[,380,align=left]
-
-The following interactive fields are provided:
-
-* *Name*: A suitable, user-defined name for the new token filter.
-
-* *Type*: The type of post-processing to be provided by the new token filter. The default is `length`, which creates tokens whose minimum number of characters is specified by the integer provided in the *Min* field, and whose maximum by the integer provided in the *Max* field.
-Additional post-processing types can be selected from the pull-down menu at the right of the field:
-+
-[#fts_custom_filters_token_filter_types]
-image::fts-custom-filters-token-filter-types.png[,420,align=left]
-+
-NOTE: The type-selection determines which interactive fields appear in the *Custom Token Filter* dialog, following *Name* and *Type*.
-The pull-down menu displays a list of available types.
-For descriptions, see the section xref:fts-analyzers.adoc#Token-Filters[Token Filters], on the page xref:fts-analyzers.adoc#Understanding-Analyzers[Understanding Analyzers].
-
-* *Min*: The minimum length of the token, in characters.
-Note that this interactive field is displayed for the `length` type, and may not appear, or be replaced, when other types are specified.
-The default value is 3.
-
-* *Max*: The maximum length of the token, in characters.
-Note that this interactive field is displayed for the `length` type and may not appear, or be replaced when other types are specified.
-The default value is 255.
-
-The following completed fields define a token filter that restricts token-length to a minimum of 3, and a maximum of 255 characters:
-
-[#fts_custom_filters_token_filter_dialog_complete]
-image::fts-custom-filters-token-filter-dialog-complete.png[,380,align=left]
-
-When saved, the new token filter is displayed on its own row, with options for further editing and deleting:
-
-[#fts_custom_filters_panel_new_token_filter]
-image::fts-custom-filters-panel-new-token-filter.png[,700,align=left]
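The effect of a `length` token filter can be sketched in a few lines of Python: tokens shorter than *Min* or longer than *Max* are dropped from the stream (a simplified illustration, not the actual implementation):

```python
def length_filter(tokens, min_len=3, max_len=255):
    """Keep only tokens whose length lies within [min_len, max_len]."""
    return [t for t in tokens if min_len <= len(t) <= max_len]

# With the defaults above, one- and two-character tokens are removed.
kept = length_filter(["a", "an", "the", "elephant"])
```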
-
-=== Wordlist
-
-Adds a list of words to be removed from the current search.
-
-Left-click *+ Add Word List* to display the *Custom Word List* dialog:
-
-[#fts_custom_wordlist_dialog_initial]
-image::fts-custom-wordlist-dialog-initial.png[,380,align=left]
-
-To create a custom word list, first, type a suitable name into the *Name* field. Then, add words by typing each individually into the field that bears the placeholder text, `word to be added`.
-
-After each word has been added, left-click on the [.ui]*+ Add* button, on the lower-right. The word is added to the central *Words* panel.
-
-Continue adding as many words as are required.
-
-For example:
-
-[#fts_custom_wordlist_dialog_complete]
-image::fts-custom-wordlist-dialog-complete.png[,380,align=left]
-
-To remove a word, select the word within the *Words* panel and left-click on the *Remove* button.
-
-To save, left-click on [.ui]*Save*. The new word list is displayed on its own row, with options for further editing and deleting:
-
-[#fts_custom_filters_panel_new_word_list]
-image::fts-custom-filters-panel-new-word-list.png[,700,align=left]
\ No newline at end of file
diff --git a/modules/fts/pages/fts-date-time-parsers.adoc b/modules/fts/pages/fts-date-time-parsers.adoc
deleted file mode 100644
index 3553bd5904..0000000000
--- a/modules/fts/pages/fts-date-time-parsers.adoc
+++ /dev/null
@@ -1,48 +0,0 @@
-= Date/Time Parsers
-
-Custom _date/time parsers_ can be specified to allow matches to be made across date/time formats.
-
-The Search Service expects dates to be in the format specified by RFC-3339, which is a specific profile of ISO-8601. Since Search queries must specify dates in RFC-3339 format, the dates stored in Full Text Indexes must also be in RFC-3339 format.
-
-A date/time parser tells the Search Service ahead of time what the date/time layout of a field will be. If the Type of a child field is set to `datetime`, and the time layouts found in that field are not in RFC-3339 format, then you can specify a custom date/time parser that contains the layouts the Search Service should expect.
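To make the layout idea concrete, the Python sketch below contrasts an RFC-3339 date with a date in a custom layout that would need its own parser. Python's `strptime` directives stand in here for the Go-style layouts the Search Service actually uses, and the sample date values are invented for illustration:

```python
from datetime import datetime

# RFC-3339 dates are what the Search Service expects by default.
rfc3339 = datetime.strptime("2023-06-15T10:30:00Z", "%Y-%m-%dT%H:%M:%SZ")

# A field stored as "15/06/2023 10:30" is NOT RFC-3339; an index containing
# it would need a custom date/time parser with a matching layout.
custom = datetime.strptime("15/06/2023 10:30", "%d/%m/%Y %H:%M")
```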
-
-To add a custom date/time parser to a Full Text Index via the Couchbase Capella UI, the following permissions are required:
-
-* You must have `Project View` privileges for the project that contains the cluster. 
-
-* You must have a database user associated with your organization's user account. The database user must have Read/Write permissions for the bucket on which the index was created.
-
-* Date/time parsers can be viewed and modified from the index’s configuration page, under the *Index Settings* section. 
-+
-Any date/time parsers that are configured for the current index can be viewed by expanding the *Date/Time Parsers* panel. If no date/time parsers have been configured for the index, the *Date/Time Parsers* panel will be empty.
-
-//[#fts_date_time_parser_initial]
-//image::fts-date-time-parsers-empty.png[,300,align=left]
-
-== Add Date/Time Parsers
-
-_Date/Time Parsers_ can be specified to allow matches to be made across different formats:
-
-To add a date/time parser, left-click *+ Add Date/Time Parser*:
-
-[#fts_date_time_parser_initial]
-image::fts-date-time-parser-initial.png[,720,align=left]
-
-The *Custom Date/Time Parser* dialog appears.
-
-[#fts_custom_date_time_parser_dialog]
-image::fts-custom-date-time-parser-dialog.png[,420,align=left]
-
-Enter a suitable name for the custom parser into the *Name* field.
-
-Enter each _layout_ for the parser into the interactive field below the *Layouts* field, left-clicking the *+ Add* button after each one.
-
-Each addition adds the layout to the list displayed in the *Layouts* field.
-
-To remove any of these, select its name in the *Layouts* field, and left-click on the *Remove* button.
-When the list is complete, left-click on the *Save* button to save.
-
-Documentation on using the _Go Programming Language_ to specify _layouts_ is provided on the page http://golang.org/pkg/time/[Package time^].
-In particular, see the section http://golang.org/pkg/time/#Parse[func Parse^].
\ No newline at end of file
diff --git a/modules/fts/pages/fts-default-settings.adoc b/modules/fts/pages/fts-default-settings.adoc
deleted file mode 100644
index 3e89d7b33d..0000000000
--- a/modules/fts/pages/fts-default-settings.adoc
+++ /dev/null
@@ -1,46 +0,0 @@
-= Default Settings
-
-Default settings can be specified in the *Advanced* panel. When opened, the Advanced panel appears as follows:
-
-[#fts_advanced_panel]
-image::fts-advanced-panel.png[,420,align=left]
-
-The Advanced panel provides the following options:
-
-== Default Type
-
-The default type for documents in the selected bucket or scope and collection. The default value for this field is `default`.
-
-== Default Analyzer
-
-This is the default analyzer to be used. The default value is `standard`.
-
-The default analyzer is applicable to all the text fields across type mappings unless explicitly overridden.
-
-It is the _standard_ analyzer, in which analysis is done by means of the Unicode tokenizer, the `to_lower` token filter, and the stop token filter.
-
-== Default Date/Time Parser
-
-This is the default date/time parser to be used.
-
-The default datetime parser is applicable to all the datetime fields across the type mappings unless explicitly overridden.
-
-The default value is `dateTimeOptional`.
-
-== Default Field
-
-Indexed fields need to have this option selected to support `include in _all`, where `_all` is the composite field.
-
-The default value is `_all`.
-
-== Store Dynamic Fields
-
-This option, when selected, ensures the inclusion of field content in returned results. Otherwise, the field content is not included in the results.
-
-== Index Dynamic Fields
-
-This option, when selected, ensures that dynamic fields are indexed. Otherwise, dynamic fields are not indexed.
-
-== DocValues for Dynamic Fields
-
-This option, when selected, ensures that the values of dynamic fields are included in the index. Otherwise, dynamic field values are not included in the index.
\ No newline at end of file
diff --git a/modules/fts/pages/fts-index-analyzers.adoc b/modules/fts/pages/fts-index-analyzers.adoc
deleted file mode 100644
index ca2a5cc843..0000000000
--- a/modules/fts/pages/fts-index-analyzers.adoc
+++ /dev/null
@@ -1,481 +0,0 @@
-[#Understanding-Analyzers]
-= Understanding Analyzers
-:page-aliases: using-analyzers.adoc,fts-analyzers.adoc,fts-using-analyzers.adoc
-
-[abstract]
-Analyzers increase search-awareness by transforming input text into token-streams, which permit the management of richer and more finely controlled forms of text-matching.
-An analyzer consists of modules, each of which performs a particular, sequenced role in the transformation.
-
-[#principles-of-text-analysis]
-== Principles of Text-Analysis
-
-_Analyzers_ pre-process input-text submitted for Full Text Search; typically, by removing characters that might prohibit certain match-options.
-
-The analysis is performed on document-contents when indexes are created and is also performed on the input-text submitted for a search.
-The benefit of analysis is often referred to as _language awareness_.
-
-For example, if the input-text for a search is `enjoyed staying here`, and the document-content contains the phrase `many enjoyable activities`, the dictionary-based words do not permit a match.
-However, by using an analyzer that (by means of its inner _Token Filter_ component) _stems_ words, the input-text yields the tokens `enjoy`, `stay`, and `here`; while the document-content yields the tokens `enjoy` and `activ`.
-By means of the common token `enjoy`, this permits a match between `enjoyed` and `enjoyable`.
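A toy Python stemmer makes the mechanism concrete: it reproduces the tokens in this example by stripping common suffixes. This is a deliberate simplification for illustration; the real analyzers use algorithms such as Porter or Snowball stemming, not a fixed suffix list:

```python
# Illustrative suffix list only -- chosen so the examples above work out.
SUFFIXES = ("ities", "able", "ing", "ed", "s")

def toy_stem(word):
    """Strip the first matching suffix; a crude stand-in for real stemming."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

# "enjoyed" and "enjoyable" both reduce to the common token "enjoy".
stems = [toy_stem(w) for w in ("enjoyed", "staying", "here", "enjoyable")]
```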
-
-Since different analyzers pre-process text in different ways, effective Full Text Search depends on the right choice of the analyzer, for the type of matches that are desired.
-
-Couchbase Full Text Search provides a number of pre-constructed analyzers that can be used with Full Text Indexes.
-Additionally, analyzers can be custom-created by means of the Couchbase Web Console.
-The remainder of this page explains the architecture of analyzers and describes the modular components that Couchbase Full Text Search makes available for custom-creation.
-It also lists the pre-constructed analyzers that are available and describes the modules that they contain.
-
-For examples of both selecting and custom-creating analyzers by means of the Couchbase Web Console, see xref:fts-creating-indexes.adoc[Creating Indexes].
-
-[#analyzer-architecture]
-== Analyzer Architecture
-
-Analyzers are built from modular components:
-
-* *Character Filters* remove undesirable characters from input: for example, the `html` character filter removes HTML tags, and indexes HTML text-content alone.
-
-* *Tokenizers* split input-strings into individual _tokens_, which together are made into a _token stream_.
-The nature of the decision-making whereby splits are made, differs across tokenizers.
-
-* *Token Filters* are chained together, with each performing additional post-processing on each token in the stream provided by the tokenizer.
-This may include reducing tokens to the _stems_ of the dictionary-based words from which they were derived, removing any remaining punctuation from tokens, and removing certain tokens deemed unnecessary.
-
-Each component-type is described in more detail below. You can use these components to create custom analyzers from the Couchbase Web Console.
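The three component types form a pipeline: character filters run first over the raw text, the tokenizer splits the result into a token stream, and each token filter post-processes the stream in turn. A minimal Python sketch of that chaining, with toy components standing in for the real ones:

```python
import re

def analyze(text, char_filters, tokenizer, token_filters):
    """Run text through the analyzer pipeline:
    character filters -> tokenizer -> chained token filters."""
    for cf in char_filters:
        text = cf(text)
    tokens = tokenizer(text)
    for tf in token_filters:
        tokens = tf(tokens)
    return tokens

# Toy components for illustration only.
strip_tags = lambda t: re.sub(r"<[^>]+>", "", t)   # character filter
whitespace = lambda t: t.split()                   # tokenizer
to_lower = lambda ts: [t.lower() for t in ts]      # token filter
drop_stops = lambda ts: [t for t in ts if t not in {"the", "a"}]

tokens = analyze("<b>The</b> Quick Fox",
                 [strip_tags], whitespace, [to_lower, drop_stops])
```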
-
-NOTE: If you have configured an analyzer for your field, you cannot specify any other analyzer for the field in the search request. However, if you do not specify any analyzer in your search query, the query will automatically choose the analyzer that was used for indexing.
-
-[#Character-Filters]
-=== Character Filters
-
-_Character Filters_ remove undesirable characters.
-
-The following filters are available:
-
-* *asciifolding*: Converts characters that are in the first 127 ASCII characters (`basic latin` unicode block) into their ASCII equivalents.
-
-* *html*: Removes HTML elements from the input, indexing the text-content alone.
-
-* *regexp*: Uses a regular expression to match characters that should be replaced with the specified replacement string.
-
-* *zero_width_spaces*: Substitutes a regular space-character for each _zero-width non-joiner_ space.
-
-[#Tokenizers]
-=== Tokenizers
-
-Tokenizers split input-strings into individual tokens: characters likely to prohibit certain kinds of matching (for example, spaces or commas) are omitted.
-
-The tokens so created are then made into a _token stream_ for the query.
-
-The following tokenizers are available from the Couchbase Web Console:
-
-* *Hebrew*: Creates tokens by breaking input-text into subsets that consist of Hebrew letters only: characters such as punctuation-marks and numbers are omitted.
-+
-Only a restricted set of operations is performed on punctuation: for example, two geresh marks cannot be combined into a single gershayim (both of which are used as punctuation).
-
-* *Letter*: Creates tokens by breaking input-text into subsets that consist of letters only: characters such as punctuation-marks and numbers are omitted.
-+
-The creation of a token ends whenever a non-letter character is encountered.
-For example, the text `Reqmnt: 7-element phrase` would return the following tokens: `Reqmnt`, `element`, and `phrase`.
-
-* *Single*: Creates a single token from the entirety of the input-text.
-For example, the text `in each place` would return the following token: `in each place`.
-+
-NOTE: This may be useful for handling URLs or email addresses, which can thus be prevented from being broken at punctuation or special-character boundaries.
-+
-It may also be used to prevent multi-word phrases (for example, place names such as `Milton Keynes` or `San Francisco`) from being broken up due to whitespace; so that they become indexed as a single term.
-
-* *Unicode*: Creates tokens by performing _Unicode Text Segmentation_ on word-boundaries, using the https://github.com/blevesearch/segment[segment^] library.
-+
-For examples, see http://www.unicode.org/reports/tr29/#Word_Boundaries[Unicode Word Boundaries^].
-
-* *Web*: Identifies email addresses, URLs, Twitter usernames, and hashtags, and attempts to keep them intact, indexing each as an individual token.
-
-* *Whitespace*: Creates tokens by breaking input-text into subsets according to where whitespace occurs.
-+
-For example, the text `in each place` would return the following tokens: `in`, `each`, and `place`.
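To make the contrast concrete, here is a Python sketch of three of the tokenizers above, reproducing the token lists from the examples. These are simplified approximations for illustration, not the actual implementations:

```python
import re

def letter_tokenize(text):
    """Letter: tokens are maximal runs of letters only."""
    return re.findall(r"[A-Za-z]+", text)

def whitespace_tokenize(text):
    """Whitespace: split wherever whitespace occurs."""
    return text.split()

def single_tokenize(text):
    """Single: the entire input becomes one token."""
    return [text]
```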
-
-[#Token-Filters]
-=== Token Filters
-
-_Token Filters_ accept a token-stream provided by a tokenizer and make modifications to the tokens in the stream.
-
-A frequently used form of token filtering is _stemming_; this reduces words to a base form that typically consists of the initial _stem_ of the word (for example, `play`, which is the stem of `player`, `playing`, `playable`, and more).
-
-With the stem used as the token, a wider variety of matches can be made (for example, the input-text `player` can be matched with the document-content `playable`).
-
-The following kinds of token-filtering are supported by Couchbase Full Text Search:
-
-* *apostrophe*: Removes all characters after an apostrophe and the apostrophe itself. For example, `they've` becomes `they`.
-
-* *camelCase*: Splits camelCase text to tokens.
-
-* *dict_compound*: Allows user-specification of a dictionary whose words can be combined into compound forms, and individually indexed.
-
-* *edge_ngram*: From each token, computes https://en.wikipedia.org/wiki/N-gram[n-grams^] that are rooted either at the front or the back.
-
-* *elision*: Identifies and removes characters that prefix a term and are separated from it by an apostrophe.
-+
-For example, in French, `l'avion` becomes `avion`.
-
-* *mark_he*: Marks the Hebrew, non-Hebrew, and numeric tokens in the token stream.
-
-* *niqqud_he*: Ensures niqqud-less spelling of tokens, for further Hebrew analysis.
-
-* *lemmatizer_he*: Lemmatizes Hebrew words (that is, derives their similar forms) and, where necessary, handles spelling mistakes to a certain extent, using yud and vav as part of the tolerance process.
-
-* *keyword_marker*: Identifies keywords and marks them as such. These keywords are then ignored by any downstream stemmer.
-
-* *length*: Removes tokens that are too short or too long for the stream.
-
-* *to_lower*: Converts all characters to lower case.
-
-* *ngram*: From each token, computes https://en.wikipedia.org/wiki/N-gram[n-grams^].
-+
-There are two parameters, which are the minimum and maximum n-gram length.
-
-* *reverse*: Simply reverses each token.
-
-* *shingle*: Computes multi-token shingles from the token stream.
-+
-For example, the token stream `the quick brown fox`, when configured with a shingle minimum and a shingle maximum length of 2, produces the tokens `the quick`, `quick brown`, and `brown fox`.
-
-* *stemmer_porter*: Transforms the token stream as per the https://tartarus.org/martin/PorterStemmer/[porter stemming algorithm^].
-
-* *stemmer_snowball*: Uses http://snowball.tartarus.org/[libstemmer^] to reduce tokens to word-stems.
-
-* *stop_tokens*: Removes from the stream tokens considered unnecessary for a Full Text Search: for example, `and`, `is`, and `the`.
-
-* *truncate*: Truncates each token to a maximum-permissible token-length.
-
-* *normalize_unicode*: Converts tokens into http://unicode.org/reports/tr15/[Unicode Normalization Form^].
-
-* *unique*: Only indexes unique tokens during analysis.
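Two of the filters above are simple enough to sketch directly in Python, reproducing the examples given for `apostrophe` and `shingle`. These are simplified approximations for illustration, not the actual implementations:

```python
def apostrophe_filter(tokens):
    """Drop the apostrophe and everything after it from each token."""
    return [t.split("'", 1)[0] for t in tokens]

def shingle_filter(tokens, size=2):
    """Compute multi-token shingles of the given size from the stream."""
    return [" ".join(tokens[i:i + size])
            for i in range(len(tokens) - size + 1)]

# "they've" becomes "they"; size-2 shingles of "the quick brown fox".
stripped = apostrophe_filter(["they've", "gone"])
shingles = shingle_filter(["the", "quick", "brown", "fox"])
```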
-
-NOTE: The token filters are frequently configured according to the special characteristics of individual languages.
-Couchbase Full Text Search provides multiple language-specific versions of the *elision*, *normalize*, *stemmer*, *possessive*, and *stop* token filters.
-
-The following table lists the specially supported languages for token filters.
-
-.Supported Token-Filter Languages
-[[token_filter_languages_5.5]]
-[cols="1,4"]
-|===
-| Name | Language
-
-| ar
-| Arabic
-
-| bg
-| Bulgarian
-
-| ca
-| Catalan
-
-| cjk
-| Chinese {vbar} Japanese {vbar} Korean
-
-| ckb
-| Kurdish
-
-| da
-| Danish
-
-| de
-| German
-
-| el
-| Greek
-
-| en
-| English
-
-| es
-| Spanish (Castilian)
-
-| eu
-| Basque
-
-| fa
-| Persian
-
-| fi
-| Finnish
-
-| fr
-| French
-
-| ga
-| Gaelic
-
-| gl
-| Spanish (Galician)
-
-| he
-| Hebrew
-
-| hi
-| Hindi
-
-| hu
-| Hungarian
-
-| hr
-| Croatian
-
-| hy
-| Armenian
-
-| id, in
-| Indonesian
-
-| it
-| Italian
-
-| nl
-| Dutch
-
-| no
-| Norwegian
-
-| pt
-| Portuguese
-
-| ro
-| Romanian
-
-| ru
-| Russian
-
-| sv
-| Swedish
-
-| tr
-| Turkish
-|===
-
-[#Creating-Analyzers]
-== Creating Analyzers
-
-Analyzers increase search-awareness by transforming input text into token-streams, which permit the management of richer and more finely controlled forms of text-matching.
-
-An analyzer consists of modules, each of which performs a particular role in the transformation (for example, removing undesirable characters, transforming standard words into stemmed or otherwise modified forms, referred to as tokens, and performing miscellaneous post-processing activities).
-
-For more information on analyzers, see
-xref:fts-analyzers.adoc[Understanding Analyzers].
-
-A default selection of analyzers is made available from the pull-down menu provided by the *Type Mappings* interface discussed above. Additional analyzers can be custom-created, by means of the *Analyzers* panel, which appears as follows:
-
-To create a new analyzer, left-click on the *+ Add Analyzer* button.
-
-[#fts_analyzers_panel_initial]
-image::fts-analyzers-panel-initial.png[,700,align=left]
-
-The *Custom Analyzer* dialog appears:
-
-[#fts_custom_analyzer_dialog_initial]
-image::fts-custom-analyzer-dialog-initial.png[,500,align=left]
-
-The dialog contains four interactive panels.
-
-* *Name:* A suitable, user-defined name for the analyzer.
-
-* *Character Filters:* One or more available character filters. (These strip out undesirable characters from input: for example, the `html` character filter removes HTML tags, and indexes HTML text-content alone.) To select from the list of available character filters, use the pull-down menu:
-+
-[#fts_analyzers_panel_select_char_filter]
-image::fts-analyzers-panel-select-char-filter.png[,500,align=left]
-+
-Following addition of one character filter, to add another, left-click on the *+ Add* button, to the right of the field.
-+
-For an explanation of character filters, see the xref:#Character-Filters[Character Filters] section.
-
-* *Tokenizer:* One of the available tokenizers. (These split input-strings into individual tokens, which together are made into a token stream. Typically, a token is established for each word.) The default value is `unicode`. To select from a list of all tokenizers available, use the pull-down menu:
-+
-[#fts_add_tokenizer_pulldown]
-image::fts-add-tokenizer-pulldown.png[,500,align=left]
-+
-For more information on tokenizers, see the xref:#Tokenizers[Tokenizers] section.
-
-* *Token Filter:* One or more of the available token filters. (When specified, these are chained together, to perform additional post-processing on the token stream.) To select from the list of available filters, use the pull-down menu:
-+
-[#fts_analyzers_panel_select_token_filter]
-image::fts-analyzers-panel-select-token-filter.png[,500,align=left]
-+
-Following addition of one token filter, to add another, left-click on the *+ Add* button, to the right of the field.
-+
-For more information on token filters, see the xref:#Token-Filters[Token Filters] section.
-
-When these fields have been appropriately completed, save by left-clicking on the *Save* button. On the *Edit Index* screen, the newly defined analyzer now appears in the *Analyzers* panel, with available options displayed for further editing, and deleting. For example:
-
-[#fts_analyzers_panel_subsequent]
-image::fts-analyzers-panel-subsequent.png[,700,align=left]
-
-[#Pre-Constructed-Analyzers]
-== Pre-Constructed Analyzers
-
-You can select from several pre-constructed analyzers in the Couchbase Web Console. For more examples of selection, see xref:fts-creating-indexes.adoc[Creating Indexes].
-
-The four basic pre-constructed analyzers are demonstrated below via the online tool at https://bleveanalysis.couchbase.com/analysis[^]:
-
-. *Keyword*: This analyzer creates a single token representing the entire input. It forces exact matches and preserves characters such as spaces.
-+
-For example, the text “the QUICK brown fox jumps over the lazy Dog” phrase returns the following tokens:
-+
-image::fts-pre-constructed-analysers-keyword.png[,700,align=left]
-
-. *Simple*: The simple analyzer uses the Letter tokenizer, which keeps letters only. The Letter tokenizer creates tokens by breaking input text into subsets consisting of only letters. It omits characters such as punctuation marks and numbers. It ends the token creation when it encounters a non-letter character.
-+
-For example, the text “the QUICK brown fox jumps over the lazy Dog” phrase returns the following tokens:
-+
-image::fts-pre-constructed-analysers-simple.png[,700,align=left]
-
-. *Standard*: The standard analyzer uses the Unicode tokenizer, the `to_lower` token filter, and the stop token filter for analysis.
-
-* *Unicode*: Creates tokens by performing Unicode Text Segmentation on word-boundaries, using the https://github.com/blevesearch/segment[segment^] library.
-+
-Token filters accept the token-stream provided by the tokenizer and modify the tokens in the stream: for example, by stop-word filtering and lower-casing.
-
-* *to_lower filter*: Converts all characters to lower case. For example, `HTML` becomes `html`.
-
-* *stop_token filter*: It removes words such as ‘and’, ‘is’, and ‘the’.
-+
-For example, the phrase “The QUICK Brown Fox Jumps Over The Lazy Dog” returns the following tokens:
-+
-image::fts-pre-constructed-analysers-standard.png[,700,align=left]
-+
-NOTE: Analyzers - Reserved Words
-The `standard` analyzer removes the stop words defined by the English language, as well as special characters. If the user wants the stop words and special characters to be searchable, the user needs to use the pre-constructed `simple` analyzer.
-
-. *Web*: The web analyzer identifies email addresses, URLs, and Twitter usernames and hashtags, and attempts to keep them intact, indexing each as an individual token.
-+
-For example, the web analyzer identifies an email address and keeps it intact, indexed as an individual token.
-+
-image::fts-pre-constructed-analysers-web.png[,750,align=left]
-
-[#Supported-Languages]
-=== Supported Analyzer Languages
-The Search Service has pre-built analyzers for the following languages:
-
-[[analyzer_languages_5.5]]
-[cols="1,4"]
-|===
-| Name | Language
-
-| ar
-| Arabic
-
-| cjk
-| Chinese {vbar} Japanese {vbar} Korean
-
-| ckb
-| Kurdish
-
-| da
-| Danish
-
-| de
-| German
-
-| en
-| English
-
-| es
-| Spanish (Castilian)
-
-| fa
-| Persian
-
-| fi
-| Finnish
-
-| fr
-| French
-
-| he
-| Hebrew
-
-| hi
-| Hindi
-
-| hu
-| Hungarian
-
-| hr
-| Croatian
-
-| it
-| Italian
-
-| nl
-| Dutch
-
-| no
-| Norwegian
-
-| pt
-| Portuguese
-
-| ro
-| Romanian
-
-| ru
-| Russian
-
-| sv
-| Swedish
-
-| tr
-| Turkish
-|===
-
-== Analyzers - Search Functions
-
-xref:n1ql:n1ql-language-reference/searchfun.adoc[Search functions] allow users to execute full text search requests within a {sqlpp} query.
-
-In the context of {sqlpp} queries, a full text search index can be described as one of the following:
-
-* xref:n1ql:n1ql-language-reference/covering-indexes.adoc[Covering index]
-
-* Non-covering index
-
-This characterization depends on the extent to which the index can answer all aspects of the SELECT predicate and the WHERE clauses of a {sqlpp} query.
-A {sqlpp} query against a non-covering index goes through a "verification phase". In this phase, documents are fetched from the query service based on the results of the search index, and the documents are validated as per the clauses defined in the query.
-
-For example, an index with only the field `field1` configured is considered a non-covering index for a query with the predicates `field1=abc` and `field2=xyz`.
-
-== Use case
-
-Consider a use case where a user has defined a special analyzer for a field in their full text search index. The following can be expected:
-
-. If the query does not use the same analyzer as specified in the full text search index, the query will not be allowed to run.
-
-. By default, the analyzer used for indexing the field (as per the index definition) is picked up if no analyzer is specified in the query.
-
-. If the index is a non-covering index for a query, and the user has not specified an explicit analyzer, the verification phase might drop documents that should have been returned as results, due to lack of query context.
-
-The user can explicitly specify the search query context in the following three ways:
-
-. Explicitly specify the analyzer to use in the query (to match with that specified in the index).
-+
-Example 1
-+
-....
-SEARCH(keyspace, {"match": "xyz", "field": "abc", "analyzer": "en"})
-....
-
-. Specify the index name within the options argument of the SEARCH function, so that this index's mapping is picked up during the verification process.
-+
-Example 2
-+
-....
-SEARCH(keyspace, {"match": "xyz", "field": "abc"}, {"index": "fts-index-1"})
-....
-
-. Specify the index mapping itself as a JSON object within the options argument of the SEARCH function; this mapping is used directly for the verification process.
-+
-Example 3
-+
-....
-SEARCH(keyspace, {"match": "xyz", "field": "abc"}, {"index": {...}})
-....
-
-NOTE: If users fail to provide this query context for non-covering queries, they may see incorrect results, including dropped documents, especially while using non-standard and custom analyzers.
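-The options above can be combined into a complete {sqlpp} statement. The following is a minimal sketch; the keyspace `travel-sample`, the field names, and the index name `fts-index-1` are placeholders carried over from the examples above, not real objects:
-
-....
-SELECT META(t).id
-FROM `travel-sample` AS t
-WHERE SEARCH(t, {"match": "xyz", "field": "abc", "analyzer": "en"},
-             {"index": "fts-index-1"});
-....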
diff --git a/modules/fts/pages/fts-index-partitions.adoc b/modules/fts/pages/fts-index-partitions.adoc
deleted file mode 100644
index 82fb59249f..0000000000
--- a/modules/fts/pages/fts-index-partitions.adoc
+++ /dev/null
@@ -1,34 +0,0 @@
-= Index Partitioning
-
-_Index Partitioning_ increases query performance by dividing and spreading a large index of documents across multiple nodes. This feature is available only in Couchbase Server Enterprise Edition.
-
-The benefits include:
-
-* The ability to scale out horizontally as index size increases.
-
-* Transparency to queries, requiring no change to existing queries.
-
-* Reduction of query latency for large, aggregated queries, since partitions can be scanned in parallel.
-
-* Provision of a low-latency range query while allowing indexes to be scaled out as needed.
-
-== Index Partitions
-
-The *Index Partitions* interface provides a section to enter the number of partitions the index is to be split into:
-
-[#fts_index_partitions_interface]
-image::fts-index-partitions-interface.png[,300,align=left]
-
-The default option for this setting is 1. Note that this number represents the number of active partitions for an index, and the active partitions are distributed across all the nodes in the cluster where the search service is running.
-
-NOTE: The type of index is saved in its JSON definition, which can be previewed in the _Index Definition Preview_ panel, at the right-hand side.
-
-See xref:fts-creating-index-from-UI-classic-editor.adoc#using-the-index-definition-preview[Using the Index Definition Preview].
-
-[source,javascript]
-----
-"planParams": {
-  "numReplicas": 0,
-  "indexPartitions": 6
-},
-----
\ No newline at end of file
diff --git a/modules/fts/pages/fts-index-replicas.adoc b/modules/fts/pages/fts-index-replicas.adoc
deleted file mode 100644
index 3578a73902..0000000000
--- a/modules/fts/pages/fts-index-replicas.adoc
+++ /dev/null
@@ -1,18 +0,0 @@
-= Index Replicas
-:page-aliases: fts-search-response-index-partition.adoc
-
-Index Replicas support availability: if an Index Service-node is lost from the cluster, its indexes may exist as replicas on another cluster-node that runs the Index Service.
-
-If an active index is lost, a replica is promoted to active status, and use of the index is uninterrupted.
-
-The *Index Replicas* interface allows up to three index replicas to be selected, from a pull-down menu:
-
-[#fts_index_replicas_interface]
-image::fts-index-replicas-interface.png[,250,align=left]
-
-Each replica partition exists on a node, separate from its active counterpart and from any other replica of that active partition. The user cannot add more replicas than the current cluster configuration permits; attempting to add more results in an error message.
-
-[#fts_index_replicas_error_message]
-image::fts-index-replicas-error-message.png[,220,align=left]
-
-The above error implies that there are not enough search nodes in the cluster to support the configured number of replicas.
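-The replica count selected here is recorded in the index's JSON definition under `planParams`, alongside the partition count. The following is a minimal sketch; the values shown are illustrative, not defaults:
-
-[source,javascript]
-----
-"planParams": {
-  "numReplicas": 2,
-  "indexPartitions": 6
-},
-----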
diff --git a/modules/fts/pages/fts-index-type.adoc b/modules/fts/pages/fts-index-type.adoc
deleted file mode 100644
index 8aec04d394..0000000000
--- a/modules/fts/pages/fts-index-type.adoc
+++ /dev/null
@@ -1,37 +0,0 @@
-= Index Type
-
-The *Index Type* interface provides a drop-down menu from which the appropriate index type can be selected:
-
-[#index_type_interface_image]
-image::fts-index-type-interface.png[,300,align=left]
-
-The following options are available:
-
-* *Version 5.0 (Moss)* was the standard form of index used in test, development, and production. This version is now deprecated.
-
-* *Version 6.0 (Scorch)* reduces the size of the index footprint on disk and provides enhanced performance for indexing and mutation-handling.
-
-NOTE: The type of an index is saved in its JSON definition, which can be previewed in the _Index Definition Preview_ panel, at the right-hand side.
-
-== Example
-
-Version 5.0 contained the following value for the `store` attribute:
-
-[source,javascript]
-----
-
-"store": {
-  "kvStoreName": "mossStore"
-},
-----
-
-Version 6.0 and later contains a different value:
-
-[source,javascript]
-----
-
-"store": {
-  "kvStoreName": "",
-  "indexType": "scorch"
-},
-----
\ No newline at end of file
diff --git a/modules/fts/pages/fts-introduction.adoc b/modules/fts/pages/fts-introduction.adoc
deleted file mode 100644
index ebb1e8fa01..0000000000
--- a/modules/fts/pages/fts-introduction.adoc
+++ /dev/null
@@ -1,135 +0,0 @@
-= Introduction to Full Text Search
-:page-aliases: full-text-intro.adoc
-
-[abstract]
-_Full Text Search_ (FTS) lets you create, manage, and query _indexes_, defined on JSON documents within a Couchbase bucket.
-
-== Full Text Search
-
-Provided by the xref:learn:services-and-indexes/services/search-service.adoc[Search Service], full text search (FTS) enables users to create, manage, and query multi-purpose indexes defined on JSON documents within a Couchbase bucket.
-
-In addition to exact matches, the full-text index can perform various search functions based on matching given terms/search parameters.
-
-Couchbase’s Global Secondary Indexes (GSI) can be used for range scans and regular pattern search, whereas FTS offers extensive capabilities for natural-language querying. 
-
-
-[#fundamentals-of-full-text-search]
-== Full Text Search: Fundamentals
-
-Every Full Text Search is performed on a user-created _Full Text Index_, which contains the targets on which searches are to be performed: these targets are values derived from the textual and other contents of documents within a specified bucket.
-
-[#features-of-full-text-search]
-== Features of Full Text Search
-
-_Full Text Search_ provides Google-like search capability on JSON documents.
-The query below looks for documents with all of the strings ("paris", "notre", "dame").
-
-=== Example
-
-[source,json]
-----
-{
-  "explain": false,
-  "fields": [
-    "*"
-  ],
-  "highlight": {},
-  "query": {
-    "query": "+paris +notre +dame"
-   }
-}
-----
-
-This query returns the following result (shown partially) from the FTS index scan on the travel-sample sample bucket.
-For each matched document, the hits field shows the document id, the score, the fields in which a matched string occurs, and the position of the matched string.
-
-[source,json]
-----
-"hits": [
-    {
-      "index": "trsample_623ab1fb8bfc1297_6ddbfb54",
-      "id": "landmark_21603",
-      "score": 2.1834097375254955,
-      "locations": {
-        "city": {
-          "paris": [
-            {
-              "pos": 1,
-              "start": 0,
-              "end": 5,
-              "array_positions": null
-            }
-          ]
-        },
-        "content": {
-          "dame": [
-            {
-              "pos": 23,
-              "start": 169,
-              "end": 173,
-              "array_positions": null
-            },
-...
-]
-----
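-The example query above can be submitted directly to the Search service's REST query endpoint. The following is a sketch; the index name `trsample` is inferred from the hit results above, and the credentials are placeholders:
-
-[source,console]
-----
-curl -XPOST -H "Content-Type: application/json" -u <username>:<password> \
-  http://localhost:8094/api/index/trsample/query \
-  -d '{"explain": false, "fields": ["*"], "highlight": {}, "query": {"query": "+paris +notre +dame"}}'
-----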
-
-Examples of natural language support include:
-
-* _Language-aware_ searching; allowing users to search for, say, the word `traveling`, and additionally obtain results for `travel` and `traveler`.
-* xref:fts-scoring.adoc[_Scoring_] of results, according to relevancy; allowing users to obtain result-sets that only contain documents awarded the highest scores.
-This keeps result-sets manageably small, even when the total number of documents returned is extremely large.
-
-== Stages of Full text search query
-A Full Text Search query, once built at the client, can be targeted to any server in the Couchbase cluster hosting the search service.
-
-Here are the stages it goes through:
-
-. The server that the client targets the search request to assumes the role of the orchestrator, or coordinating node, once it receives the external request.
-
-. The coordinating node first looks up the index (making sure it exists).
-
-. The coordinating node obtains the "plan" that the index was deployed with. The plan contains details of how many partitions the index was split into, and of all the servers on which any of these partitions reside.
-
-. The coordinating node sets up a unique list of servers that it needs to dispatch an “internal” request to. A server in the Couchbase cluster is eligible if and only if it hosts a partition belonging to the index under consideration.
-
-. Once the internal requests have been dispatched by the coordinating node to each of the servers, it waits to hear back from them. Simultaneously, if any of the index's partitions are resident on the coordinating node, search requests are dispatched to each of those partitions as well (disk-bound).
-
-. Those servers in the cluster that receive the “internal” request from the coordinating node will forward it to each of the index partitions they host (disk-bound).
-
-. Separate search requests are dispatched concurrently to all index partitions resident within a server, and the server waits to hear back from them.
-
-. Once the server hears back from all the partitions it hosts, it merges the results obtained from each of the partitions before packaging them into a response and shipping it back to the coordinating node.
-
-The coordinating node waits for responses from:
-
-* each of the index partitions resident within the node
-* each of the servers in the cluster that it dispatched the internal request to
-
-Once all the results from the local index partitions and the remote index partitions are obtained, the coordinating node merges all of them, packages them into a response, and ships them back to the client where the request originated.
-
-Full Text Search is powered by http://www.blevesearch.com/[Bleve^], an open source search and indexing library written in _Go_.
-Full Text Search uses Bleve for the indexing of documents and also makes available Bleve’s extensive range of _query types_.
-These include:
-
-* xref:fts-supported-queries-match.adoc[Match], xref:fts-supported-queries-match-phrase.adoc[Match Phrase]
-* xref:fts-supported-queries-DocID-query.adoc[DocId Query], and xref:fts-supported-queries-prefix-query.adoc[Prefix Query]
-* xref:fts-supported-queries-conjuncts-disjuncts.adoc[Conjuncts & Disjuncts], and xref:fts-supported-queries-boolean-field-query.adoc[Boolean] 
-* xref:fts-supported-queries-numeric-range.adoc[Numeric Range] and xref:fts-supported-queries-date-range.adoc[Date Range] 
-* xref:fts-supported-queries-geo-spatial.adoc[Geospatial] queries
-* xref:fts-supported-queries-query-string-query.adoc[Query String Query], which employs a special syntax to express the details of each query.
-* xref:fts-supported-queries-fuzzy.adoc[Fuzzy]
-* xref:fts-supported-queries-regexp.adoc[Regexp]
-* xref:fts-supported-queries-wildcard.adoc[Wildcard]
-* xref:fts-supported-queries-boosting-the-score-query.adoc[Boosting the Score]
-
-Full Text Search includes pre-built text analyzers for multiple languages. For the current list of supported languages in Couchbase Server, refer to xref:fts-index-analyzers.adoc#Supported-Languages[Supported Analyzer Languages].
-
-== Authorization for Full Text Search
-
-To access Full Text Search, users require appropriate _roles_.
-The role *FTS Admin* must therefore be assigned to those who intend to create indexes; and the role *FTS Searcher* to those who intend to perform searches.
-For information on creating users and assigning roles, see xref:learn:security/authorization-overview.adoc[Authorization].
-
-// == FTS Application
-// #Need Information#
diff --git a/modules/fts/pages/fts-manage-index-lifecycle.adoc b/modules/fts/pages/fts-manage-index-lifecycle.adoc
deleted file mode 100644
index 404f41c2fc..0000000000
--- a/modules/fts/pages/fts-manage-index-lifecycle.adoc
+++ /dev/null
@@ -1,26 +0,0 @@
-= Manage Index Lifecycle
-
-Full Text Indexes, once created, can be cloned, edited, and/or deleted. They are accessed from the *Search* tab: left-click on this to display the *Full Text Search* panel, which contains a tabular presentation of currently existing indexes, with a row for each index.
-
-(See xref:fts-searching-from-the-UI.adoc[Searching from the UI] for a full illustration.)
-
-To manage an index, left-click on its row. The row expands, as follows:
-
-[#fts_index_management_ui]
-image::fts-index-management-ui.png[,820,align=left]
-
-== Edit Index
-
-* [.ui]*Edit* brings up the *Edit Index* screen, which allows the index to be modified. Saving modifications causes the index to be rebuilt.
-
-"Quick Edit" that goes to the quick editor for an index definition also results in the same functionalities.
-
-NOTE: Both the [.ui]*Edit Index* and [.ui]*Clone Index* screens are in most respects the same as the [.ui]*Add Index* screen, which was itself described in xref:fts-searching-from-the-UI.adoc[Searching from the UI].
-
-== Delete Index
-
-* [.ui]*Delete* causes the current index to be deleted. Index deletion is an asynchronous process run in the background.
-
-== Clone Index
-
-* [.ui]*Clone* brings up the *Clone Index* screen, which allows a copy of the current index to be modified as appropriate and saved under a new name.
diff --git a/modules/fts/pages/fts-multi-collection-behaviour.adoc b/modules/fts/pages/fts-multi-collection-behaviour.adoc
deleted file mode 100644
index f228e54f07..0000000000
--- a/modules/fts/pages/fts-multi-collection-behaviour.adoc
+++ /dev/null
@@ -1,220 +0,0 @@
-= Multi-Collection Behaviour
-
-Couchbase's FTS service is the only service that can create indexes that span collections.
- 
-Multi-Collection Index: A user can search a multi-collection index in the same way as a bucket-based index. Since a multi-collection index contains data from multiple source collections, it is helpful to know the source collection of every document xref:fts-search-response-hits.adoc[hit] in the search result.
- 
-* Users can see the source collection names in the fields section of each document xref:fts-search-response-hits.adoc[hit] under the key _$c. See the image below for an example.
-
-image::fts-multi-collection-behaviour.png[,750,align=left]
-
-* Users can also narrow their full-text search requests to only specific Collection(s) within the multi-Collection index. This focus speeds up searches on a large index.
-
-Below is a sample search request scoped to the collection `airport`.
-
-*Example*
-[source,console]
-----
-curl -XPOST -H "Content-Type: application/json" -u <username>:<password> \
-  http://localhost:8094/api/index/demoindex/query -d \
-'{
-  "explain": true,
-  "fields": [
-    "*"
-  ],
-  "highlight": {},
-  "query": {
-    "query": "france"
-  },
-  "size": 10,
-  "from": 50,
-  "collections": ["airport"]
-}'
-----
-
-* At search time, there is no validation to determine whether or not a collection with a given name exists. As a result, users won’t receive any validation errors for the incorrect collection names within the search request.
-See the below example:
-
-*Example*
-
-An incorrect collection name "XYZ" is used.
-
-[source,console]
-----
-
-curl -XPOST -H "Content-Type: application/json" -u <username>:<password> \
-  http://localhost:8094/api/index/demoindex/query -d \
-'{
-  "query": {
-    "query": "france"
-  },
-  "size": 10,
-  "from": 50,
-  "collections": ["XYZ"]
-}'
-----
-
-*Result:*
-
-[source,json]
-----
-{
-  "status": {
-    "total": 1,
-    "failed": 0,
-    "successful": 1 
-  },
-  "request": {
-    "query": {
-      "query": "france"
-    },
-    "size": 10,
-    "from": 50,
-    "highlight": null,
-    "fields": null,
-    "facets": null,
-    "explain": false,
-    "sort": [
-      "-_score"
-    ],
-    "includeLocations": false,
-    "search_after": null,
-    "search_before": null
-  },
-  "hits": [
-    {
-      "index": "demoindex_6dbcc808a8278714_4c1c5584",
-      "id": "hotel_21844",
-      "score": 0.8255329922213157,
-      "sort": [
-        "_score"
-      ],
-      "fields": {
-        "_$c": "hotel"
-      }
-    },
-    {
-      "index": "demoindex_6dbcc808a8278714_4c1c5584",
-      "id": "hotel_21652",
-      "score": 0.8236828315727989,
-      "sort": [
-        "_score"
-      ],
-      "fields": {
-        "_$c": "hotel"
-      }
-    },
-    {
-      "index": "demoindex_6dbcc808a8278714_4c1c5584",
-      "id": "hotel_1364",
-      "score": 0.8232253432142588,
-      "sort": [
-        "_score"
-      ],
-      "fields": {
-        "_$c": "hotel"
-      }
-    },
-    {
-      "index": "demoindex_6dbcc808a8278714_4c1c5584",
-      "id": "hotel_21721",
-      "score": 0.8225069701742189,
-      "sort": [
-        "_score"
-      ],
-      "fields": {
-        "_$c": "hotel"
-      }
-    },
-    {
-      "index": "demoindex_6dbcc808a8278714_4c1c5584",
-      "id": "hotel_21674",
-      "score": 0.8218917130827247,
-      "sort": [
-        "_score"
-      ],
-      "fields": {
-        "_$c": "hotel"
-      }
-    },
-    {
-      "index": "demoindex_6dbcc808a8278714_4c1c5584",
-      "id": "hotel_35854",
-      "score": 0.8218917094653351,
-      "sort": [
-        "_score"
-      ],
-      "fields": {
-        "_$c": "hotel"
-      }
-    },
-    {
-      "index": "demoindex_6dbcc808a8278714_4c1c5584",
-      "id": "hotel_21847",
-      "score": 0.8212458150010249,
-      "sort": [
-        "_score"
-      ],
-      "fields": {
-        "_$c": "hotel"
-      }
-    },
-    {
-      "index": "demoindex_6dbcc808a8278714_4c1c5584",
-      "id": "hotel_21849",
-      "score": 0.8201164200350234,
-      "sort": [
-        "_score"
-      ],
-      "fields": {
-        "_$c": "hotel"
-      }
-    },
-    {
-      "index": "demoindex_6dbcc808a8278714_4c1c5584",
-      "id": "hotel_21846",
-      "score": 0.8197896824791812,
-      "sort": [
-        "_score"
-      ],
-      "fields": {
-        "_$c": "hotel"
-      }
-    },
-    {
-      "index": "demoindex_6dbcc808a8278714_4c1c5584",
-      "id": "hotel_20421",
-      "score": 0.8191068922164917,
-      "sort": [
-        "_score"
-      ],
-      "fields": {
-        "_$c": "hotel"
-      }
-    }
-  ],
-  "total_hits": 141,
-  "max_score": 1.0743017811485551,
-  "took": 999962,
-  "facets": null
-}
-----
-
-== Impact of using Role-Based Access Control
-
-The Couchbase Full Admin can administer Role-Based Access Control (RBAC) roles for full-text search indexes at a Bucket, Scope, or Collection(s) level.
-
-FTS provides two primary roles for managing access control:
-
-* xref:learn:security/roles.adoc#search-admin[Search Admin]
-* xref:learn:security/roles.adoc#search-reader[Search Reader]
-   
-A user must have at least Search Reader permissions at the source Bucket, Scope, or Collection level to access an FTS index.
-
-NOTE: With multi-collection indexes, the user must have search reader roles for all source collections in order to access a multi-collection index.
-
-== Data lifecycle impact 
-
-Multi-collection indexes are deleted when any of the corresponding source collections are deleted. Therefore, multi-collection indexes are best suited for collections with similar data lifespans.
\ No newline at end of file
diff --git a/modules/fts/pages/fts-perform-searches.adoc b/modules/fts/pages/fts-perform-searches.adoc
deleted file mode 100644
index e8c3c3fead..0000000000
--- a/modules/fts/pages/fts-perform-searches.adoc
+++ /dev/null
@@ -1,22 +0,0 @@
-= Performing Searches
-:page-aliases: fts-performing-searches.adoc
-
-Full text searches can be performed with:
-
-* The Couchbase Web Console.
-This UI can also be used to create indexes and analyzers.
-Refer to xref:fts-searching-from-the-ui.adoc[Searching from the UI] for information.
-
-* The Couchbase REST API.
-Refer to xref:fts-searching-with-curl-http-requests.adoc[Searching with the REST API] for information.
-Refer also to xref:rest-api:rest-fts.adoc[Full Text Search API] for REST reference details.
-
-* The Couchbase SDK.
-This supports several languages, and allows full text searches to be performed with each.
-Refer to the SDK's xref:java-sdk:concept-docs:full-text-search-overview.adoc[Full Text Search] page for information.
-
-NOTE: The xref:java-sdk:howtos:full-text-searching-with-sdk.adoc[Searching from the SDK] page for the _Java_ SDK provides an extensive code-example that demonstrates multiple options for performing full text searches.
-
-* The {sqlpp} Search functions.
-These enable you to perform a full text search as part of a {sqlpp} query.
-Refer to xref:n1ql:n1ql-language-reference/searchfun.adoc[Search Functions] for information.
diff --git a/modules/fts/pages/fts-query-string-syntax-boosting.adoc b/modules/fts/pages/fts-query-string-syntax-boosting.adoc
deleted file mode 100644
index c318177d29..0000000000
--- a/modules/fts/pages/fts-query-string-syntax-boosting.adoc
+++ /dev/null
@@ -1,32 +0,0 @@
-[#Boosting]
-= Boosting
-
-When you specify multiple query-clauses, you can specify the relative importance of a given clause by suffixing it with the `^` operator followed by a number, or by specifying the `boost` parameter with the number to boost the search.
-
-== Example
-
-[source, json]
-----
-description:pool name:pool^5
-----
-
-The above syntax performs Match Queries for *pool* in both the `name` and `description` fields, but documents having the term in the `name` field score higher.
-
-[source, json]
-----
-"query": {
-    "disjuncts": [
-         {
-      "match": "glossop",
-      "field": "city",
-      "boost": 10
-    },
-         {
-      "match": "glossop",
-      "field": "title"
-    }    
-  ]  
-}
-----
-
-The above syntax performs Match Queries for a city *glossop* in both the `city` and `title` fields, but documents having the term in the `city` field score higher.
\ No newline at end of file
diff --git a/modules/fts/pages/fts-query-string-syntax-date-ranges.adoc b/modules/fts/pages/fts-query-string-syntax-date-ranges.adoc
deleted file mode 100644
index 16fa578270..0000000000
--- a/modules/fts/pages/fts-query-string-syntax-date-ranges.adoc
+++ /dev/null
@@ -1,6 +0,0 @@
-[#Date-Range]
-= Date Range
-
-You can perform date range searches by using the `>`, `>=`, `<`, and `\<=` operators, followed by a date value in quotes.
-
-For example, `created:>"2016-09-21"` will perform a xref:fts-supported-queries-date-range.adoc[date range query] on the `created` field for values after September 21, 2016.
\ No newline at end of file
diff --git a/modules/fts/pages/fts-query-string-syntax-escaping.adoc b/modules/fts/pages/fts-query-string-syntax-escaping.adoc
deleted file mode 100644
index 6f021d3f34..0000000000
--- a/modules/fts/pages/fts-query-string-syntax-escaping.adoc
+++ /dev/null
@@ -1,18 +0,0 @@
-[#Escaping]
-= Escaping
-
-The following quoted-string enumerates the characters which may be escaped:
-
-----
-"+-=&|>
-----
-
-You can perform numeric range searches by using the `>`, `>=`, `<`, and `\<=` operators, each followed by a numeric value.
-
-== Example
-
-`reviews.ratings.Cleanliness:>4` performs a xref:fts-supported-queries-numeric-range.adoc[numeric range query] on the `reviews.ratings.Cleanliness` field, for values greater than 4.
\ No newline at end of file
diff --git a/modules/fts/pages/fts-query-string-syntax.adoc b/modules/fts/pages/fts-query-string-syntax.adoc
deleted file mode 100644
index 0a96358d13..0000000000
--- a/modules/fts/pages/fts-query-string-syntax.adoc
+++ /dev/null
@@ -1,15 +0,0 @@
-= Query String Syntax
-:page-aliases: query-string-queries.adoc
-
-[abstract]
-Query strings enable you to describe complex queries using a simple syntax.
-
-Using the query string syntax, the following query types can be performed:
-
-* xref:fts:fts-query-string-syntax-boosting.adoc[Boosting]
-* xref:fts:fts-query-string-syntax-date-ranges.adoc[Date Range]
-* xref:fts:fts-query-string-syntax-escaping.adoc[Escaping]
-* xref:fts:fts-query-string-syntax-field-scoping.adoc[Field Scoping]
-* xref:fts:fts-query-string-syntax-match-phrase.adoc[Match Phrase]
-* xref:fts:fts-query-string-syntax-match.adoc[Match Query Syntax]
-* xref:fts:fts-query-string-syntax-numeric-ranges.adoc[Numeric Range]  
\ No newline at end of file
diff --git a/modules/fts/pages/fts-queryshape-circle.adoc b/modules/fts/pages/fts-queryshape-circle.adoc
deleted file mode 100644
index ef33f2dc6a..0000000000
--- a/modules/fts/pages/fts-queryshape-circle.adoc
+++ /dev/null
@@ -1,468 +0,0 @@
-= Circle Query
-
-[abstract]
-A GeoJSON Circle Query against any GeoJSON type.
-
-== QueryShape for a Circle Query
-
-A GeoJSON query via a GeoShape of Circle finds GeoJSON types in a Search index using the three relations: `intersects`, `contains`, and `within`.
-
-A Circle represents a disc shape on the earth's spherical surface. This is a Couchbase extension to GeoJSON.
-
-For full details on the formats for the radius, refer to xref:fts-supported-queries-geojson-spatial.adoc#specifying-distances[Distances].
-
-=== Circle `Intersects` Query
-
-An `intersects` query for a circle returns all the matched documents with shapes that overlap the area of the circular shape in the query.
-
-A sample circle `intersects` query is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "coordinates": [
-          -2.235143,
-          53.482358
-        ],
-        "type": "circle",
-        "radius": "100mi"
-      },
-      "relation": "intersects"
-    }
-  }
-}
-----
-
-Intersection rules for the Circle Query with other indexed GeoJSON shapes in the document set are given below.
-
-[#geospatial-distance-units,cols="1,2"]
-|===
-| Intersects (relation) +
-Document Shape|{nbsp} +
-Circle (GeoShape)
-
-| Point
-| Intersects if the point lies within the circular region.
-
-| LineString
-| Intersects if the line cuts/goes through anywhere within the circular region.
-
-| Polygon
-| Intersects if there is an area overlap between the polygon and the circular region in the query.
-
-| MultiPoint
-| Intersects if any of the points lie within the circular region.
-
-| MultiLineString
-| Intersects if any of the lines cut/go through anywhere within the circular region.
-
-| MultiPolygon
-| Intersects if there is an area overlap between any of the polygons in the multipolygon array and the circular region in the query.
-
-| GeometryCollection
-| Intersects if there is an overlap between any of the heterogeneous shapes (the six types above) in the geometrycollection array in the document and the query circle.
-
-| Circle
-| Intersects if the area of the circle intersects with the query circle.
-
-| Envelope
-| Intersects if the area of the rectangle intersects with the query circle.
-
-|=== 
-
-=== Circle `Contains` Query
-
-A `contains` query for a circle returns all the matched documents with shapes that completely contain the area of the circular shape in the query.
-
-A sample circle `contains` query is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "coordinates": [
-          -2.235143,
-          53.482358
-        ],
-        "type": "circle",
-        "radius": "100mi"
-      },
-      "relation": "contains"
-    }
-  }
-}
-----
-
-Containment rules for the Circle Query with other indexed GeoJSON shapes in the document set are given below.
-
-[cols="1,2"]
-|===
-| Contains (relation) +
-Document Shape|{nbsp} +
-Circle (GeoShape)
-
-| Point
-| NA. Points can’t cover a circle.
-
-| LineString
-| NA. LineStrings can’t cover a circle.
-
-| Polygon
-| Matches if the polygon area contains the circular region in the query.
-
-| MultiPoint
-| NA. MultiPoints can’t cover a circle.
-
-| MultiLineString
-| NA. MultiLineStrings can’t cover a circle.
-
-| MultiPolygon
-| Matches if any of the polygons in the multipolygon array contains the circular region in the query.
-
-| GeometryCollection
-| Matches if any of the heterogeneous shapes (the six types above) in the document's geometrycollection array contains the query circle.
-
-| Circle
-| Matches if the area of the document circle contains the query circle.
-
-| Envelope
-| Matches if the area of the document rectangle contains the query circle.
-
-|===
-
-=== Circle `WithIn` Query
-
-The Within query is not supported by line geometries.
-
-A `within` query for the circle returns all the matched documents with shapes that reside completely within the area of the circular shape in the query.
-A circle `within` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "coordinates": [
-          -2.235143,
-          53.482358
-        ],
-        "type": "circle",
-        "radius": "100mi"
-      },
-      "relation": "within"
-    }
-  }
-}
-----
-
-WithIn rules for the Circle Query with other indexed GeoJSON shapes in the document set are given below.
-
-[#geospatial-distance-units,cols="1,2"]
-|===
-| Within (relation) +
-Document Shape|{nbsp} +
-Circle (GeoShape)
-
-| Point
-| Matches if the point lies within the circular region.
-
-| LineString
-| Matches if the linestring lies within the circular region.
-
-| Polygon
-| Matches if the polygon area resides within the query circle.
-
-| MultiPoint
-| Matches if all the points in the array lie within the circular region.
-
-| MultiLineString
-| Matches if all the linestrings in the array lie within the circular region.
-
-| MultiPolygon
-| Matches if every polygon area resides completely within the circular region in the query.
-
-| GeometryCollection
-| Matches if every heterogeneous shape (of the six types above) in the document's geometrycollection array resides completely within the query circle.
-
-| Circle
-| Matches if the document circle resides within the query circle.
-
-| Envelope
-| Matches if the document rectangle resides within the query circle.
-
-|===
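The Point rule above ("matches if the point lies within the circular region") can be approximated client-side with a great-circle distance check. A rough sketch using the haversine formula (the server's exact geodesic math and earth-radius constant may differ):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6371000.0  # mean earth radius; an approximation

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in meters between two lon/lat points."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(h))

def point_within_circle(pt, center, radius_m):
    # A point matches the `within` rule when its distance from the
    # circle center does not exceed the radius.
    return haversine_m(pt[0], pt[1], center[0], center[1]) <= radius_m

center = (-2.235143, 53.482358)   # the query center used on this page
# A nearby point is inside the 100mi (160934.4 m) circle; a far one is not.
assert point_within_circle((-2.24, 53.48), center, 160934.4)
assert not point_within_circle((28.955043, 40.991862), center, 160934.4)
```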
-
-== Example Circle Query (against Points)
-
-include::partial$fts-geoshape-prereq-common.adoc[]
-
-Intersects if the point lies within the circular region.
-
-The results are specified to be sorted on `name`. Note that the hotel and landmark types have a `name` field, while the airport type has an `airportname` field; all these values are analyzed as keywords (exposed as `name`).
-
-[source, command]
-----
-curl -s -XPOST -H "Content-Type: application/json" \
--u ${CB_USERNAME}:${CB_PASSWORD} http://${CB_HOSTNAME}:8094/api/index/test_geojson/query \
--d '{
-  "query": {
-    "geometry": {
-      "shape": {
-        "coordinates": [
-          -2.235143,
-          53.482358
-        ],
-        "type": "circle",
-        "radius": "100mi"
-      },
-      "relation": "intersects"
-    },
-    "field": "geojson"
-  },
-  "size": 5,
-  "from": 0,
-  "sort": ["name"]
-}' |  jq .
-----
-
-The output of five hits (from a total of 842 matching docs) is as follows:
-
-[source, json]
-----
-{
-  "status": {
-    "total": 1,
-    "failed": 0,
-    "successful": 1
-  },
-  "request": {
-    "query": {
-      "geometry": {
-        "shape": {
-          "type": "circle",
-          "coordinates": [
-            -2.235143,
-            53.482358
-          ],
-          "radiusInMeters": 160934.4
-        },
-        "relation": "intersects"
-      },
-      "field": "geojson"
-    },
-    "size": 5,
-    "from": 0,
-    "highlight": null,
-    "fields": null,
-    "facets": null,
-    "explain": false,
-    "sort": [
-      "name"
-    ],
-    "includeLocations": false,
-    "search_after": null,
-    "search_before": null
-  },
-  "hits": [
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "hotel_15466",
-      "score": 0.48460386356013374,
-      "sort": [
-        "8 Clarendon Crescent"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "landmark_3548",
-      "score": 0.2153234885704102,
-      "sort": [
-        "AMC"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "landmark_570",
-      "score": 0.12120554320433605,
-      "sort": [
-        "Abacus Books"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "landmark_6350",
-      "score": 0.27197802451106445,
-      "sort": [
-        "Aberconwy House"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "hotel_40",
-      "score": 0.2929891838246811,
-      "sort": [
-        "Aberdovey Hillside Village"
-      ]
-    }
-  ],
-  "total_hits": 842,
-  "max_score": 0.5928042064997198,
-  "took": 24655382,
-  "facets": null
-}
-----
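Note that the `"radius": "100mi"` in the request is echoed back as `"radiusInMeters": 160934.4`. A sketch of that unit normalization (the unit table here is an assumption for illustration, not the server's full list):

```python
# Meters per unit; suffixes assumed to mirror common distance units.
UNIT_TO_METERS = {"mm": 0.001, "cm": 0.01, "m": 1.0, "km": 1000.0,
                  "in": 0.0254, "ft": 0.3048, "yd": 0.9144, "mi": 1609.344}

def radius_in_meters(radius):
    """Convert a radius string such as "100mi" to meters."""
    # Try longer suffixes first so "100mm" is not parsed as "0mm" + "m".
    for unit in sorted(UNIT_TO_METERS, key=len, reverse=True):
        if radius.endswith(unit):
            return float(radius[:-len(unit)]) * UNIT_TO_METERS[unit]
    raise ValueError(f"unrecognized unit in {radius!r}")

converted = radius_in_meters("100mi")  # ~160934.4, matching radiusInMeters above
```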
-
-== Example Circle Query (against Circles)
-
-include::partial$fts-geoshape-prereq-common.adoc[]
-
-Matches if the document circle resides within the query circle.
-
-The results are specified to be sorted on `name`. Note that the hotel and landmark types have a `name` field, while the airport type has an `airportname` field; all these values are analyzed as keywords (exposed as `name`).
-
-[source, command]
-----
-curl -s -XPOST -H "Content-Type: application/json" \
--u ${CB_USERNAME}:${CB_PASSWORD} http://${CB_HOSTNAME}:8094/api/index/test_geojson/query \
--d '{
-  "query": {
-    "geometry": {
-      "shape": {
-        "coordinates": [
-          -2.235143,
-          53.482358
-        ],
-        "type": "circle",
-        "radius": "100mi"
-      },
-      "relation": "within"
-    },
-    "field": "geoarea"
-  },
-  "size": 5,
-  "from": 0,
-  "sort": ["name"]
-}' |  jq .
-----
-
-The output of five hits (from a total of 36 matching docs) is as follows:
-
-[source, json]
-----
-{
-  "status": {
-    "total": 1,
-    "failed": 0,
-    "successful": 1
-  },
-  "request": {
-    "query": {
-      "geometry": {
-        "shape": {
-          "type": "circle",
-          "coordinates": [
-            -2.235143,
-            53.482358
-          ],
-          "radiusInMeters": 160934.4
-        },
-        "relation": "within"
-      },
-      "field": "geoarea"
-    },
-    "size": 5,
-    "from": 0,
-    "highlight": null,
-    "fields": null,
-    "facets": null,
-    "explain": false,
-    "sort": [
-      "name"
-    ],
-    "includeLocations": false,
-    "search_after": null,
-    "search_before": null
-  },
-  "hits": [
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_577",
-      "score": 0.1543972016608065,
-      "sort": [
-        "Barkston Heath"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_469",
-      "score": 0.5853253239353176,
-      "sort": [
-        "Birmingham"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_514",
-      "score": 0.14663352685195305,
-      "sort": [
-        "Blackpool"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_511",
-      "score": 0.19445510224080859,
-      "sort": [
-        "Brough"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_568",
-      "score": 0.1561033061076272,
-      "sort": [
-        "Church Fenton"
-      ]
-    }
-  ],
-  "total_hits": 36,
-  "max_score": 1.015720869823755,
-  "took": 8549509,
-  "facets": null
-}
-----
diff --git a/modules/fts/pages/fts-queryshape-envelope.adoc b/modules/fts/pages/fts-queryshape-envelope.adoc
deleted file mode 100644
index 254a6df425..0000000000
--- a/modules/fts/pages/fts-queryshape-envelope.adoc
+++ /dev/null
@@ -1,446 +0,0 @@
-= Envelope Query
-
-[abstract]
-A GeoJSON Envelope Query against any GeoJSON type.
-
-== QueryShape for an Envelope Query
-
-A GeoJSON query via a GeoShape of Envelope to find GeoJSON types in a Search index using the 3 relations intersects, contains, and within.
-
-Also called a bounded rectangle query, specified as +++[[minLon, maxLat], [maxLon, minLat]]+++ (the top-left and bottom-right corners). The Envelope type is a Couchbase extension to GeoJSON.
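Because the corner order matters (+++[[minLon, maxLat], [maxLon, minLat]]+++), it can help to validate it before sending the query. A hedged Python sketch (the `envelope_shape` helper and its checks are illustrative, not server behavior):

```python
def envelope_shape(min_lon, max_lat, max_lon, min_lat):
    """Build an Envelope shape from its top-left and bottom-right corners.

    Argument order mirrors the corner order in the query:
    [[minLon, maxLat], [maxLon, minLat]].
    """
    if not (min_lon < max_lon and min_lat < max_lat):
        raise ValueError("expected minLon < maxLon and minLat < maxLat")
    return {
        "type": "Envelope",
        "coordinates": [[min_lon, max_lat], [max_lon, min_lat]],
    }

# The bounding rectangle used in the samples on this page.
shape = envelope_shape(-2.235143, 53.482358, 28.955043, 40.991862)
```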
-
-=== Envelope `Intersects` Query
-
-An `intersects` query for the envelope returns all the matched documents with shapes that overlap the area of the rectangle shape in the query.
-
-An envelope `intersects` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "Envelope",      
-        "coordinates": [
-          [-2.235143, 53.482358],
-          [28.955043, 40.991862]
-        ]
-      },
-      "relation": "intersects"
-    }
-  }
-}
-----
-
-Intersection rules for the Envelope Query with other indexed GeoJSON shapes in the document set are given below.
-
-[#geospatial-distance-units,cols="1,2"]
-|===
-| Intersects (relation) +
-Document Shape|{nbsp} +
-Envelope (GeoShape)
-
-| Point
-| Matches if the point lies within the query rectangle region.
-
-| LineString
-| Matches if the linestring intersects or lies within the query rectangle.
-
-| Polygon
-| Matches if the polygon area overlaps the query rectangle.
-
-| MultiPoint
-| Matches if any of the points in the array lie within the rectangle region.
-
-| MultiLineString
-| Matches if any of the linestrings intersect or lie within the rectangle area.
-
-| MultiPolygon
-| Matches if any of the polygon areas overlaps the rectangle region.
-
-| GeometryCollection
-| Matches if any of the heterogeneous shapes (the six types above) in the document's geometrycollection array overlaps the query rectangle.
-
-| Circle
-| Matches if the area of the query rectangle overlaps the document circle.
-
-| Envelope
-| Matches if the query rectangle overlaps the document rectangle area.
-
-|=== 
-
-=== Envelope `Contains` Query
-
-A `contains` query for the envelope returns all the matched documents with shapes that contain the area of the rectangle shape in the query. 
-
-An envelope `contains` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "Envelope",
-        "coordinates": [
-          [-2.235143, 53.482358],
-          [28.955043, 40.991862]
-        ]
-      },
-      "relation": "contains"
-    }
-  }
-}
-----
-
-Containment rules for the Envelope Query with other indexed GeoJSON shapes in the document set are given below.
-
-[#geospatial-distance-units,cols="1,2"]
-|===
-| Contains (relation) +
-Document Shape|{nbsp} +
-Envelope (GeoShape)
-
-| Point
-| NA, Point can’t contain an envelope.
-
-| LineString
-| NA, LineString can’t contain an envelope.
-
-| Polygon
-| Matches if the polygon area contains the rectangle region in the query.
-
-| MultiPoint
-| NA, MultiPoint can’t contain an envelope.
-
-| MultiLineString
-| NA, MultiLineString can’t contain an envelope.
-
-| MultiPolygon
-| Matches if any of the polygon areas contains the entire rectangle region.
-
-| GeometryCollection
-| Matches if any of the heterogeneous shapes (the six types above) in the document's geometrycollection array contains the query rectangle.
-
-| Circle
-| Matches if the query rectangle resides within the document circle.
-
-| Envelope
-| Matches if the query rectangle resides within the document rectangle.
-
-|===
-
-=== Envelope `WithIn` Query
-
-The Within query is not supported by line geometries.
-
-A `within` query for the envelope returns all the matched documents with shapes that are contained within the area of the rectangle shape in the query. 
-
-An envelope `within` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "Envelope",
-        "coordinates": [
-          [-2.235143, 53.482358],
-          [28.955043, 40.991862]
-        ]
-      },
-      "relation": "within"
-    }
-  }
-}
-----
-
-WithIn rules for the Envelope Query with other indexed GeoJSON shapes in the document set are given below.
-
-[#geospatial-distance-units,cols="1,2"]
-|===
-| Within (relation) +
-Document Shape|{nbsp} +
-Envelope (GeoShape)
-
-| Point
-| Matches if the point lies within the query rectangle region.
-
-| LineString
-| Matches if the linestring resides completely within the query rectangle. 
-
-| Polygon
-| Matches if the polygon resides completely within the query rectangle. 
-
-| MultiPoint
-| Matches if all the points in the array lie within the query rectangle.
-
-| MultiLineString
-| Matches if all the linestrings lie within the query rectangle area.
-
-| MultiPolygon
-| Matches if all the polygons reside within the query rectangle region.
-
-| GeometryCollection
-| Matches if all the heterogeneous shapes (of the six types above) in the document's geometrycollection array reside within the query rectangle.
-
-| Circle
-| Matches if the document circle resides within the query rectangle.
-
-| Envelope
-| Matches if the document rectangle resides within the query rectangle.
-
-|===
-
-== Example Envelope Query (against Points)
-
-include::partial$fts-geoshape-prereq-common.adoc[]
-
-Matches if the point lies within the query rectangle region.
-
-The results are specified to be sorted on `name`. Note that the hotel and landmark types have a `name` field, while the airport type has an `airportname` field; all these values are analyzed as keywords (exposed as `name`).
-
-[source, command]
-----
-curl -s -XPOST -H "Content-Type: application/json" \
--u ${CB_USERNAME}:${CB_PASSWORD} http://${CB_HOSTNAME}:8094/api/index/test_geojson/query \
--d '{
-  "query": {
-    "field": "geojson",
-    "geometry": {
-      "shape": {
-        "type": "Envelope",      
-        "coordinates": [
-          [-2.235143, 53.482358],
-          [28.955043, 40.991862]
-        ]
-      },
-      "relation": "within"
-    }
-  },
-  "size": 5,
-  "from": 0,
-  "sort": ["name"]
-}' |  jq .
-----
-
-The output of five hits (from a total of 2024 matching docs) is as follows:
-
-[source, json]
-----
-{
-  "status": {
-    "total": 1,
-    "failed": 0,
-    "successful": 1
-  },
-  "request": {
-    "query": {
-      "geometry": {
-        "shape": {
-          "type": "Envelope",
-          "coordinates": [
-            [
-              -2.235143,
-              53.482358
-            ],
-            [
-              28.955043,
-              40.991862
-            ]
-          ]
-        },
-        "relation": "within"
-      },
-      "field": "geojson"
-    },
-    "size": 5,
-    "from": 0,
-    "highlight": null,
-    "fields": null,
-    "facets": null,
-    "explain": false,
-    "sort": [
-      "name"
-    ],
-    "includeLocations": false,
-    "search_after": null,
-    "search_before": null
-  },
-  "hits": [
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "hotel_1364",
-      "score": 0.05896334942635901,
-      "sort": [
-        "'La Mirande Hotel"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "landmark_16144",
-      "score": 0.004703467956838207,
-      "sort": [
-        "02 Shepherd's Bush Empire"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "landmark_16181",
-      "score": 0.004703467956838207,
-      "sort": [
-        "2 Willow Road"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "landmark_16079",
-      "score": 0.004703467956838207,
-      "sort": [
-        "20 Fenchurch Street"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "landmark_40437",
-      "score": 0.004703467956838207,
-      "sort": [
-        "30 St. Mary Axe"
-      ]
-    }
-  ],
-  "total_hits": 2024,
-  "max_score": 0.12470500060351324,
-  "took": 17259514,
-  "facets": null
-}
-----
-
-== Example Envelope Query (against Circles)
-
-include::partial$fts-geoshape-prereq-common.adoc[]
-
-Matches if the area of the query rectangle overlaps the document circle.
-
-The results are specified to be sorted on `name`. Note that the hotel and landmark types have a `name` field, while the airport type has an `airportname` field; all these values are analyzed as keywords (exposed as `name`).
-
-[source, command]
-----
-curl -s -XPOST -H "Content-Type: application/json" \
--u ${CB_USERNAME}:${CB_PASSWORD} http://${CB_HOSTNAME}:8094/api/index/test_geojson/query \
--d '{
-  "query": {
-    "field": "geoarea",
-    "geometry": {
-      "shape": {
-        "type": "Envelope",
-        "coordinates": [
-          [-2.235143, 53.482358],
-          [28.955043, 40.991862]
-        ]
-      },
-      "relation": "intersects"
-    }
-  },
-  "size": 5,
-  "from": 0,
-  "sort": ["name"]
-}' |  jq .
-----
-
-The output of five hits (from a total of 293 matching docs) is as follows:
-
-[source, json]
-----
-{
-  "status": {
-    "total": 1,
-    "failed": 0,
-    "successful": 1
-  },
-  "request": {
-    "query": {
-      "geometry": {
-        "shape": {
-          "type": "Envelope",
-          "coordinates": [
-            [
-              -2.235143,
-              53.482358
-            ],
-            [
-              28.955043,
-              40.991862
-            ]
-          ]
-        },
-        "relation": "intersects"
-      },
-      "field": "geoarea"
-    },
-    "size": 5,
-    "from": 0,
-    "highlight": null,
-    "fields": null,
-    "facets": null,
-    "explain": false,
-    "sort": [
-      "name"
-    ],
-    "includeLocations": false,
-    "search_after": null,
-    "search_before": null
-  },
-  "hits": [
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1372",
-      "score": 0.008758192642105457,
-      "sort": [
-        "Abbeville"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1294",
-      "score": 0.07778849955604289,
-      "sort": [
-        "Aire Sur L Adour"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1329",
-      "score": 0.009493654411662942,
-      "sort": [
-        "Aix Les Bains"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1347",
-      "score": 0.06002598189280991,
-      "sort": [
-        "Aix Les Milles"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_8588",
-      "score": 0.010149143194537646,
-      "sort": [
-        "All Airports"
-      ]
-    }
-  ],
-  "total_hits": 293,
-  "max_score": 0.4253566663133814,
-  "took": 13358586,
-  "facets": null
-}
-----
diff --git a/modules/fts/pages/fts-queryshape-geometrycollection.adoc b/modules/fts/pages/fts-queryshape-geometrycollection.adoc
deleted file mode 100644
index 40fea60a87..0000000000
--- a/modules/fts/pages/fts-queryshape-geometrycollection.adoc
+++ /dev/null
@@ -1,612 +0,0 @@
-= GeometryCollection Query
-
-[abstract]
-A GeoJSON GeometryCollection Query against any GeoJSON type.
-
-== QueryShape for a GeometryCollection Query
-
-A GeoJSON query via a GeoShape of GeometryCollection to find GeoJSON types in a Search index using the 3 relations intersects, contains, and within.
-
-=== GeometryCollection `Intersects` Query
-
-An `intersects` query for a geometrycollection returns all the matched documents with shapes that overlap the area of any of the shapes in the geometrycollection array within the query.
-
-A geometrycollection `intersects` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "geometrycollection",
-        "geometries": [{
-          "type": "linestring",
-          "coordinates": [
-            [1.954764, 50.962097],
-            [3.029578, 49.868547]
-          ]
-        }, {
-          "type": "multipolygon",
-          "coordinates": [
-            [
-              [
-                [-114.027099609375, 42.00848901572399],
-                [-114.04907226562499, 36.99377838872517],
-                [-109.05029296875, 36.99377838872517],
-                [-109.05029296875, 40.98819156349393],
-                [-111.060791015625, 40.98819156349393],
-                [-111.02783203125, 42.00848901572399],
-                [-114.027099609375, 42.00848901572399]
-              ]
-            ],
-            [
-              [
-                [-109.05029296875, 37.00255267215955],
-                [-102.041015625, 37.00255267215955],
-                [-102.041015625, 40.9964840143779],
-                [-109.05029296875, 40.9964840143779],
-                [-109.05029296875, 37.00255267215955]
-              ]
-            ]
-          ]
-        }]
-      },
-      "relation": "intersects"
-    }
-  }
-}
-----
-
-The intersection rules are similar to those of the respective shapes composed within the GeometryCollection array.
-The rules will be applied to any indexed GeoJSON shape in the array.
-If any of the query shapes within the geometries array intersects any of the indexed shapes, then it is a matching document.
-
-=== GeometryCollection `Contains` Query
-
-A `contains` query for a geometrycollection returns all the matched documents with shapes that contain the geometrycollection in the query.
-
-A geometrycollection `contains` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "geometrycollection",
-        "geometries": [{
-          "type": "linestring",
-          "coordinates": [
-            [1.954764, 50.962097],
-            [3.029578, 49.868547]
-          ]
-        }, {
-          "type": "multipolygon",
-          "coordinates": [
-            [
-              [
-                [-114.027099609375, 42.00848901572399],
-                [-114.04907226562499, 36.99377838872517],
-                [-109.05029296875, 36.99377838872517],
-                [-109.05029296875, 40.98819156349393],
-                [-111.060791015625, 40.98819156349393],
-                [-111.02783203125, 42.00848901572399],
-                [-114.027099609375, 42.00848901572399]
-              ]
-            ],
-            [
-              [
-                [-109.05029296875, 37.00255267215955],
-                [-102.041015625, 37.00255267215955],
-                [-102.041015625, 40.9964840143779],
-                [-109.05029296875, 40.9964840143779],
-                [-109.05029296875, 37.00255267215955]
-              ]
-            ]
-          ]
-        }]
-      },
-      "relation": "contains"
-    }
-  }
-}
-----
-
-The containment rules are similar to those of the respective shapes composed within the GeometryCollection array.
-The rules will be applied to any indexed GeoJSON shape in the array.
-If all of the query shapes within the geometries array are completely contained (cumulatively) by the indexed shapes, then it is a matching document.
-
-=== GeometryCollection `WithIn` Query
-
-The Within query is not supported by line geometries.
-
-A `within` query for a geometrycollection returns all the matched documents with shapes that reside completely within the geometrycollection in the query.
-
-A geometrycollection `within` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "geometrycollection",
-        "geometries": [{
-          "type": "linestring",
-          "coordinates": [
-            [1.954764, 50.962097],
-            [3.029578, 49.868547]
-          ]
-        }, {
-          "type": "multipolygon",
-          "coordinates": [
-            [
-              [
-                [-114.027099609375, 42.00848901572399],
-                [-114.04907226562499, 36.99377838872517],
-                [-109.05029296875, 36.99377838872517],
-                [-109.05029296875, 40.98819156349393],
-                [-111.060791015625, 40.98819156349393],
-                [-111.02783203125, 42.00848901572399],
-                [-114.027099609375, 42.00848901572399]
-              ]
-            ],
-            [
-              [
-                [-109.05029296875, 37.00255267215955],
-                [-102.041015625, 37.00255267215955],
-                [-102.041015625, 40.9964840143779],
-                [-109.05029296875, 40.9964840143779],
-                [-109.05029296875, 37.00255267215955]
-              ]
-            ]
-          ]
-        }]
-      },
-      "relation": "within"
-    }
-  }
-}
-----
-
-The within rules are similar to those of the respective shapes composed within the GeometryCollection array.
-The rules will be applied to any indexed GeoJSON shape in the array.
-If any of the query shapes within the geometries array completely contains the indexed shapes, then it is a matching document.
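Per the three rules above, the relations differ mainly in how the per-shape results combine: `intersects` and `within` match when any query shape in the geometries array satisfies the relation, while `contains` requires all of them. A sketch of that combination logic (the pairwise geometry checks themselves are performed by the Search service and are stubbed here as booleans):

```python
def gc_matches(relation, per_shape_results):
    """Combine per-query-shape results for a geometrycollection query.

    per_shape_results[i] is True when the indexed shape(s) satisfy the
    relation with the i-th shape in the query's geometries array.
    """
    if relation == "intersects":
        return any(per_shape_results)   # any overlapping query shape matches
    if relation == "contains":
        return all(per_shape_results)   # every query shape must be contained
    if relation == "within":
        return any(per_shape_results)   # any containing query shape matches
    raise ValueError(f"unknown relation: {relation!r}")

# One of the two query shapes intersects an indexed shape: a match.
assert gc_matches("intersects", [False, True])
# Only one of the two query shapes is contained: not a match.
assert not gc_matches("contains", [True, False])
```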
-
-== Example GeometryCollection Query (against Points)
-
-include::partial$fts-geoshape-prereq-common.adoc[]
-
-Matches when the geometrycollection in the query contains the point in the document, including points on the edge or coinciding with the vertices of the geometrycollection.
-
-The results are specified to be sorted on `name`. Note that the hotel and landmark types have a `name` field, while the airport type has an `airportname` field; all these values are analyzed as keywords (exposed as `name`).
-
-[source, command]
-----
-curl -s -XPOST -H "Content-Type: application/json" \
--u ${CB_USERNAME}:${CB_PASSWORD} http://${CB_HOSTNAME}:8094/api/index/test_geojson/query \
--d '{
-  "query": {
-    "field": "geojson",
-    "geometry": {
-      "shape": {
-        "type": "geometrycollection",
-        "geometries": [{
-          "type": "linestring",
-          "coordinates": [
-            [1.954764, 50.962097],
-            [3.029578, 49.868547]
-          ]
-        }, {
-          "type": "multipolygon",
-          "coordinates": [
-            [
-              [
-                [-114.027099609375, 42.00848901572399],
-                [-114.04907226562499, 36.99377838872517],
-                [-109.05029296875, 36.99377838872517],
-                [-109.05029296875, 40.98819156349393],
-                [-111.060791015625, 40.98819156349393],
-                [-111.02783203125, 42.00848901572399],
-                [-114.027099609375, 42.00848901572399]
-              ]
-            ],
-            [
-              [
-                [-109.05029296875, 37.00255267215955],
-                [-102.041015625, 37.00255267215955],
-                [-102.041015625, 40.9964840143779],
-                [-109.05029296875, 40.9964840143779],
-                [-109.05029296875, 37.00255267215955]
-              ]
-            ]
-          ]
-        }]
-      },
-      "relation": "intersects"
-    }
-  },
-  "size": 5,
-  "from": 0,
-  "sort": ["name"]
-}' |  jq .
-----
-
-The output of five hits (from a total of 47 matching docs) is as follows:
-
-[source, json]
-----
-{
-  "status": {
-    "total": 1,
-    "failed": 0,
-    "successful": 1
-  },
-  "request": {
-    "query": {
-      "geometry": {
-        "shape": {
-          "type": "geometrycollection",
-          "geometries": [
-            {
-              "type": "linestring",
-              "coordinates": [
-                [
-                  1.954764,
-                  50.962097
-                ],
-                [
-                  3.029578,
-                  49.868547
-                ]
-              ]
-            },
-            {
-              "type": "multipolygon",
-              "coordinates": [
-                [
-                  [
-                    [
-                      -114.027099609375,
-                      42.00848901572399
-                    ],
-                    [
-                      -114.04907226562499,
-                      36.99377838872517
-                    ],
-                    [
-                      -109.05029296875,
-                      36.99377838872517
-                    ],
-                    [
-                      -109.05029296875,
-                      40.98819156349393
-                    ],
-                    [
-                      -111.060791015625,
-                      40.98819156349393
-                    ],
-                    [
-                      -111.02783203125,
-                      42.00848901572399
-                    ],
-                    [
-                      -114.027099609375,
-                      42.00848901572399
-                    ]
-                  ]
-                ],
-                [
-                  [
-                    [
-                      -109.05029296875,
-                      37.00255267215955
-                    ],
-                    [
-                      -102.041015625,
-                      37.00255267215955
-                    ],
-                    [
-                      -102.041015625,
-                      40.9964840143779
-                    ],
-                    [
-                      -109.05029296875,
-                      40.9964840143779
-                    ],
-                    [
-                      -109.05029296875,
-                      37.00255267215955
-                    ]
-                  ]
-                ]
-              ]
-            }
-          ]
-        },
-        "relation": "intersects"
-      },
-      "field": "geojson"
-    },
-    "size": 5,
-    "from": 0,
-    "highlight": null,
-    "fields": null,
-    "facets": null,
-    "explain": false,
-    "sort": [
-      "name"
-    ],
-    "includeLocations": false,
-    "search_after": null,
-    "search_before": null
-  },
-  "hits": [
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_7001",
-      "score": 0.06568712770601859,
-      "sort": [
-        "Aspen Pitkin County Sardy Field"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_8854",
-      "score": 0.03222560611574136,
-      "sort": [
-        "Boulder Municipal"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_6999",
-      "score": 0.030963288954845132,
-      "sort": [
-        "Brigham City"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_7857",
-      "score": 0.06475045434251171,
-      "sort": [
-        "Bryce Canyon"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_3567",
-      "score": 0.03222560611574136,
-      "sort": [
-        "Buckley Afb"
-      ]
-    }
-  ],
-  "total_hits": 47,
-  "max_score": 0.23169125425271897,
-  "took": 32362669,
-  "facets": null
-}
-----
-
-== Example GeometryCollection Query (against Circles)
-
-include::partial$fts-geoshape-prereq-common.adoc[]
-
-Intersects when the query geometrycollection intersects the circular region in the document.
-
-The results are specified to be sorted on `name`. Note that the hotel and landmark types have a `name` field, while the airport type has an `airportname` field; all these values are analyzed as keywords (exposed as `name`).
-
-[source, command]
-----
-curl -s -XPOST -H "Content-Type: application/json" \
--u ${CB_USERNAME}:${CB_PASSWORD} http://${CB_HOSTNAME}:8094/api/index/test_geojson/query \
--d '{
-  "query": {
-    "field": "geoarea",
-    "geometry": {
-      "shape": {
-        "type": "geometrycollection",
-        "geometries": [{
-          "type": "linestring",
-          "coordinates": [
-            [1.954764, 50.962097],
-            [3.029578, 49.868547]
-          ]
-        }, {
-          "type": "multipolygon",
-          "coordinates": [
-            [
-              [
-                [-114.027099609375, 42.00848901572399],
-                [-114.04907226562499, 36.99377838872517],
-                [-109.05029296875, 36.99377838872517],
-                [-109.05029296875, 40.98819156349393],
-                [-111.060791015625, 40.98819156349393],
-                [-111.02783203125, 42.00848901572399],
-                [-114.027099609375, 42.00848901572399]
-              ]
-            ],
-            [
-              [
-                [-109.05029296875, 37.00255267215955],
-                [-102.041015625, 37.00255267215955],
-                [-102.041015625, 40.9964840143779],
-                [-109.05029296875, 40.9964840143779],
-                [-109.05029296875, 37.00255267215955]
-              ]
-            ]
-          ]
-        }]
-      },
-      "relation": "intersects"
-    }
-  },
-  "size": 5,
-  "from": 0,
-  "sort": ["name"]
-}' |  jq .
-----
-
-The output of five (5) hits (from a total of 52 matching docs) is as follows:
-
-[source, json]
-----
-{
-  "status": {
-    "total": 1,
-    "failed": 0,
-    "successful": 1
-  },
-  "request": {
-    "query": {
-      "geometry": {
-        "shape": {
-          "type": "geometrycollection",
-          "geometries": [
-            {
-              "type": "linestring",
-              "coordinates": [
-                [
-                  1.954764,
-                  50.962097
-                ],
-                [
-                  3.029578,
-                  49.868547
-                ]
-              ]
-            },
-            {
-              "type": "multipolygon",
-              "coordinates": [
-                [
-                  [
-                    [
-                      -114.027099609375,
-                      42.00848901572399
-                    ],
-                    [
-                      -114.04907226562499,
-                      36.99377838872517
-                    ],
-                    [
-                      -109.05029296875,
-                      36.99377838872517
-                    ],
-                    [
-                      -109.05029296875,
-                      40.98819156349393
-                    ],
-                    [
-                      -111.060791015625,
-                      40.98819156349393
-                    ],
-                    [
-                      -111.02783203125,
-                      42.00848901572399
-                    ],
-                    [
-                      -114.027099609375,
-                      42.00848901572399
-                    ]
-                  ]
-                ],
-                [
-                  [
-                    [
-                      -109.05029296875,
-                      37.00255267215955
-                    ],
-                    [
-                      -102.041015625,
-                      37.00255267215955
-                    ],
-                    [
-                      -102.041015625,
-                      40.9964840143779
-                    ],
-                    [
-                      -109.05029296875,
-                      40.9964840143779
-                    ],
-                    [
-                      -109.05029296875,
-                      37.00255267215955
-                    ]
-                  ]
-                ]
-              ]
-            }
-          ]
-        },
-        "relation": "intersects"
-      },
-      "field": "geoarea"
-    },
-    "size": 5,
-    "from": 0,
-    "highlight": null,
-    "fields": null,
-    "facets": null,
-    "explain": false,
-    "sort": [
-      "name"
-    ],
-    "includeLocations": false,
-    "search_after": null,
-    "search_before": null
-  },
-  "hits": [
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_7001",
-      "score": 0.044156513771700656,
-      "sort": [
-        "Aspen Pitkin County Sardy Field"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_8854",
-      "score": 0.021237915321935485,
-      "sort": [
-        "Boulder Municipal"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1258",
-      "score": 0.4165991857145269,
-      "sort": [
-        "Bray"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_6999",
-      "score": 0.01797996798708474,
-      "sort": [
-        "Brigham City"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_7857",
-      "score": 0.09702723621245812,
-      "sort": [
-        "Bryce Canyon"
-      ]
-    }
-  ],
-  "total_hits": 52,
-  "max_score": 0.8460432736575045,
-  "took": 18306647,
-  "facets": null
-}
-----
diff --git a/modules/fts/pages/fts-queryshape-linestring.adoc b/modules/fts/pages/fts-queryshape-linestring.adoc
deleted file mode 100644
index a1dc23ac45..0000000000
--- a/modules/fts/pages/fts-queryshape-linestring.adoc
+++ /dev/null
@@ -1,361 +0,0 @@
-= LineString Query
-
-[abstract]
-A GeoJSON LineString Query against any GeoJSON type.
-
-== QueryShape for a LineString Query
-
-A GeoJSON query via a GeoShape of LineString finds GeoJSON types in a Search index using the three relations: `intersects`, `contains`, and `within`.
-
-=== LineString `Intersects` Query
-
-An `intersects` query for a linestring returns all the matched documents with shapes that intersect the linestring within the query.
-
-A linestring `intersects` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "linestring",
-        "coordinates": [
-          [1.954764, 50.962097],
-          [3.029578, 49.868547]
-        ]
-      },
-      "relation": "intersects"
-    }
-  }
-}
-----
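The query body above can also be assembled programmatically. Below is a minimal sketch using only Python's standard library; the helper function and the `geojson` field name are illustrative assumptions, not part of the Search API.

```python
import json

def geoshape_query(field, shape_type, coordinates, relation):
    # Build the JSON body for a Search geoshape query (sketch).
    return {
        "query": {
            "field": field,
            "geometry": {
                "shape": {"type": shape_type, "coordinates": coordinates},
                "relation": relation,
            },
        }
    }

body = geoshape_query(
    "geojson",
    "linestring",
    [[1.954764, 50.962097], [3.029578, 49.868547]],
    "intersects",
)
print(json.dumps(body, indent=2))
```

The same helper covers the other relations by passing, for example, `relation="contains"`.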
-
-Intersection rules for the LineString Query with other indexed GeoJSON shapes in the document set are given below.
-
-[#geospatial-distance-units,cols="1,2"]
-|===
-| Intersects (relation) +
-Document Shape|{nbsp} +
-LineString (GeoShape)
-
-| Point
-| Intersects when any of the line endpoints overlap the point in the document. 
-
-| LineString
-| Intersects when the linestring in the query intersects the linestring in the document.
-
-| Polygon
-| Intersects when the linestring in the query intersects any of the edges of the polygon in the document.
-
-| MultiPoint
-| Intersects when any of the line endpoints overlap any of the points in the multipoint array in the document.
-
-| MultiLineString
-| Intersects when the linestring in the query intersects any of the linestrings in the multilinestring array in the document.
-
-| MultiPolygon
-| Intersects when the linestring in the query intersects any of the edges of any of the polygons in the multipolygon array in the document.
-
-| GeometryCollection
-| Intersects when the query linestring intersects any of the heterogeneous (above 6) shapes in the geometrycollection array in the document.
-
-| Circle
-| Intersects when the query linestring intersects the circular region in the document.
-
-| Envelope
-| Intersects when the query linestring intersects the rectangular/bounded box region in the document.
-
-|=== 
-
-=== LineString `Contains` Query
-
-A `contains` query for linestring returns all the matched documents with shapes that contain the linestring within the query. 
-
-A linestring `contains` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "linestring",
-        "coordinates": [
-          [1.954764, 50.962097],
-          [3.029578, 49.868547]
-        ]
-      },
-      "relation": "contains"
-    }
-  }
-}
-----
-
-Containment rules for the LineString Query with other indexed GeoJSON shapes in the document set are given below.
-
-[#geospatial-distance-units,cols="1,2"]
-|===
-| Contains (relation) +
-Document Shape|{nbsp} +
-LineString (GeoShape)
-
-| Point
-| NA, as a point is a non-closed shape.
-
-| LineString
-| NA, as a linestring is a non-closed shape.
-
-| Polygon
-| Contains when both endpoints (start and end) of the linestring in the query are within the area of the polygon in the document.
-
-| MultiPoint
-| NA, as a multipoint is a non-closed shape.
-
-| MultiLineString
-| NA, as a multilinestring is a non-closed shape.
-
-| MultiPolygon
-| Contains when both endpoints (start and end) of the linestring in the query are within the area of any of the polygons in the multipolygon array in the document.
-
-| GeometryCollection
-| Contains when both endpoints (start and end) of the linestring in the query lie within any of the heterogeneous (above 6) shapes in the geometrycollection array in the document.
-
-| Circle
-| Contains when both endpoints (start and end) of the linestring in the query are within the area of the circular shape in the document.
-
-| Envelope
-| Contains when both endpoints (start and end) of the linestring in the query are within the area of the rectangle in the document.
-
-|===
-
-=== LineString `Within` Query
-
-The Within query is not supported by line geometries.
-
-== Example LineString Query (against Points)
-
-include::partial$fts-geoshape-prereq-common.adoc[]
-
-Intersects when any of the line endpoints overlap the point in the document. 
-
-The results are specified to be sorted on `name`. Note that the hotel and landmark types have a `name` field, while the airport type has an `airportname` field; all these values are analyzed as keywords (exposed as `name`).
-
-[source, command]
-----
-curl -s -XPOST -H "Content-Type: application/json" \
--u ${CB_USERNAME}:${CB_PASSWORD} http://${CB_HOSTNAME}:8094/api/index/test_geojson/query \
--d '{
-  "fields": ["name"],
-  "query": {
-    "field": "geojson",
-    "geometry": {
-      "shape": {
-        "type": "linestring",
-        "coordinates": [
-          [1.954764, 50.962097],
-          [3.029578, 49.868547]
-        ]
-      },
-      "relation": "intersects"
-    }
-  },
-  "sort": ["name"]
-}' |  jq .
-----
-
-The output of two (2) hits (from a total of 2 matching docs) is as follows:
-
-[source, json]
-----
-{
-  "status": {
-    "total": 1,
-    "failed": 0,
-    "successful": 1
-  },
-  "request": {
-    "query": {
-      "geometry": {
-        "shape": {
-          "type": "linestring",
-          "coordinates": [
-            [
-              1.954764,
-              50.962097
-            ],
-            [
-              3.029578,
-              49.868547
-            ]
-          ]
-        },
-        "relation": "intersects"
-      },
-      "field": "geojson"
-    },
-    "size": 10,
-    "from": 0,
-    "highlight": null,
-    "fields": [
-      "name"
-    ],
-    "facets": null,
-    "explain": false,
-    "sort": [
-      "name"
-    ],
-    "includeLocations": false,
-    "search_after": null,
-    "search_before": null
-  },
-  "hits": [
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1254",
-      "score": 0.28065220923315787,
-      "sort": [
-        "Calais Dunkerque"
-      ],
-      "fields": {
-        "name": "Calais Dunkerque"
-      }
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1255",
-      "score": 0.7904517545191571,
-      "sort": [
-        "Peronne St Quentin"
-      ],
-      "fields": {
-        "name": "Peronne St Quentin"
-      }
-    }
-  ],
-  "total_hits": 2,
-  "max_score": 0.7904517545191571,
-  "took": 13592354,
-  "facets": null
-}
-----
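Fields can be pulled out of a response like the one above with a few lines of Python. This sketch parses a trimmed-down copy of the response (scores and the index name abbreviated or omitted):

```python
import json

raw = """
{
  "hits": [
    {"id": "airport_1254", "score": 0.2806, "fields": {"name": "Calais Dunkerque"}},
    {"id": "airport_1255", "score": 0.7904, "fields": {"name": "Peronne St Quentin"}}
  ],
  "total_hits": 2
}
"""
response = json.loads(raw)
# Collect the requested stored field from every hit.
names = [hit["fields"]["name"] for hit in response["hits"]]
print(names)  # ['Calais Dunkerque', 'Peronne St Quentin']
```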
-
-== Example LineString Query (against Circles)
-
-include::partial$fts-geoshape-prereq-common.adoc[]
-
-Intersects when the query linestring intersects the circular region in the document.
-
-The results are specified to be sorted on `name`. Note that the hotel and landmark types have a `name` field, while the airport type has an `airportname` field; all these values are analyzed as keywords (exposed as `name`).
-
-[source, command]
-----
-curl -s -XPOST -H "Content-Type: application/json" \
--u ${CB_USERNAME}:${CB_PASSWORD} http://${CB_HOSTNAME}:8094/api/index/test_geojson/query \
--d '{
-  "fields": ["name"],
-  "query": {
-    "field": "geoarea",
-    "geometry": {
-      "shape": {
-        "type": "linestring",
-        "coordinates": [
-          [1.954764, 50.962097],
-          [3.029578, 49.868547]
-        ]
-      },
-      "relation": "intersects"
-    }
-  },
-  "sort": ["name"]
-}' |  jq .
-----
-
-The output of three (3) hits (from a total of 3 matching docs) is as follows:
-
-[source, json]
-----
-{
-  "status": {
-    "total": 1,
-    "failed": 0,
-    "successful": 1
-  },
-  "request": {
-    "query": {
-      "geometry": {
-        "shape": {
-          "type": "linestring",
-          "coordinates": [
-            [
-              1.954764,
-              50.962097
-            ],
-            [
-              3.029578,
-              49.868547
-            ]
-          ]
-        },
-        "relation": "intersects"
-      },
-      "field": "geoarea"
-    },
-    "size": 10,
-    "from": 0,
-    "highlight": null,
-    "fields": [
-      "name"
-    ],
-    "facets": null,
-    "explain": false,
-    "sort": [
-      "name"
-    ],
-    "includeLocations": false,
-    "search_after": null,
-    "search_before": null
-  },
-  "hits": [
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1258",
-      "score": 1.4305136320748595,
-      "sort": [
-        "Bray"
-      ],
-      "fields": {
-        "name": "Bray"
-      }
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1254",
-      "score": 0.20713889888331502,
-      "sort": [
-        "Calais Dunkerque"
-      ],
-      "fields": {
-        "name": "Calais Dunkerque"
-      }
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1255",
-      "score": 2.905133945992968,
-      "sort": [
-        "Peronne St Quentin"
-      ],
-      "fields": {
-        "name": "Peronne St Quentin"
-      }
-    }
-  ],
-  "total_hits": 3,
-  "max_score": 2.905133945992968,
-  "took": 6943298,
-  "facets": null
-}
-----
diff --git a/modules/fts/pages/fts-queryshape-multilinestring.adoc b/modules/fts/pages/fts-queryshape-multilinestring.adoc
deleted file mode 100644
index fa16931f82..0000000000
--- a/modules/fts/pages/fts-queryshape-multilinestring.adoc
+++ /dev/null
@@ -1,331 +0,0 @@
-= MultiLineString Query
-
-[abstract]
-A GeoJSON MultiLineString Query against any GeoJSON type.
-
-== QueryShape for a MultiLineString Query
-
-A GeoJSON query via a GeoShape of MultiLineString finds GeoJSON types in a Search index using the three relations: `intersects`, `contains`, and `within`.
-
-=== MultiLineString `Intersects` Query
-
-An `intersects` query for multilinestring returns all the matched documents with shapes that overlap any of the linestrings in the multilinestring array within the query.
-
-A multilinestring `intersects` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "MultiLineString",
-        "coordinates": [
-          [ [1.954764, 50.962097], [3.029578, 49.868547] ],
-          [ [3.029578, 49.868547], [-0.387444, 48.545836] ]
-        ]
-      },
-      "relation": "intersects"
-    }
-  }
-}
-----
-
-Intersection rules for the MultiLineString Query with other indexed GeoJSON shapes are similar to those of the LineString shape described earlier.
-The only difference is that the intersection rules are applied to every LineString instance inside the MultiLineString array.
-
-=== MultiLineString `Contains` Query
-
-A `contains` query for multilinestring returns all the matched documents with shapes that contain the multilinestring within the query. 
-
-A multilinestring `contains` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "MultiLineString",      
-        "coordinates": [
-          [ [1.954764, 50.962097], [3.029578, 49.868547] ],
-          [ [3.029578, 49.868547], [-0.387444, 48.545836] ]
-        ]
-      },
-      "relation": "contains"
-    }
-  }
-}
-----
-
-Containment rules for the MultiLineString Query with other indexed GeoJSON shapes are similar to those of the LineString shape described earlier.
-The only difference is that, to qualify as a match, the containment rules must be satisfied by every LineString instance inside the MultiLineString array.
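The any-versus-all distinction between the two relations can be sketched in a few lines of Python; `segment_matches` here is a hypothetical per-linestring predicate standing in for the real geometric test performed by the Search service:

```python
def multilinestring_intersects(lines, segment_matches):
    # Intersects: at least one member linestring must match.
    return any(segment_matches(line) for line in lines)

def multilinestring_contained(lines, segment_matches):
    # Contains: every member linestring must match.
    return all(segment_matches(line) for line in lines)

lines = [
    [[1.954764, 50.962097], [3.029578, 49.868547]],
    [[3.029578, 49.868547], [-0.387444, 48.545836]],
]
# Pretend only the first member linestring matches the document shape:
matches_first_only = lambda line: line[0] == [1.954764, 50.962097]
print(multilinestring_intersects(lines, matches_first_only))  # True
print(multilinestring_contained(lines, matches_first_only))   # False
```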
-
-=== MultiLineString `Within` Query
-
-The Within query is not supported by line geometries, as a multilinestring is a non-closed shape.
-
-A multilinestring `within` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "MultiLineString",      
-        "coordinates": [
-          [ [1.954764, 50.962097], [3.029578, 49.868547] ],
-          [ [3.029578, 49.868547], [-0.387444, 48.545836] ]
-        ]
-      },
-      "relation": "within"
-    }
-  }
-}
-----
-
-== Example MultiLineString Query (against Points)
-
-include::partial$fts-geoshape-prereq-common.adoc[]
-
-Intersects when the multilinestring in the query passes through the point in the document, including points lying on an edge or coinciding with a vertex of the multilinestring.
-
-The results are specified to be sorted on `name`. Note that the hotel and landmark types have a `name` field, while the airport type has an `airportname` field; all these values are analyzed as keywords (exposed as `name`).
-
-[source, command]
-----
-curl -s -XPOST -H "Content-Type: application/json" \
--u ${CB_USERNAME}:${CB_PASSWORD} http://${CB_HOSTNAME}:8094/api/index/test_geojson/query \
--d '{
-  "query": {
-    "field": "geojson",
-    "geometry": {
-      "shape": {
-        "type": "MultiLineString",
-        "coordinates": [
-          [ [1.954764, 50.962097], [3.029578, 49.868547] ],
-          [ [3.029578, 49.868547], [-0.387444, 48.545836] ]
-        ]
-      },
-      "relation": "intersects"
-    }
-  },
-  "size": 5,
-  "from": 0,
-  "sort": ["name"]
-}' |  jq .
-----
-
-The output of three (3) hits (from a total of 3 matching docs) is as follows:
-
-[source, json]
-----
-{
-  "status": {
-    "total": 1,
-    "failed": 0,
-    "successful": 1
-  },
-  "request": {
-    "query": {
-      "geometry": {
-        "shape": {
-          "type": "MultiLineString",
-          "coordinates": [
-            [
-              [
-                1.954764,
-                50.962097
-              ],
-              [
-                3.029578,
-                49.868547
-              ]
-            ],
-            [
-              [
-                3.029578,
-                49.868547
-              ],
-              [
-                -0.387444,
-                48.545836
-              ]
-            ]
-          ]
-        },
-        "relation": "intersects"
-      },
-      "field": "geojson"
-    },
-    "size": 5,
-    "from": 0,
-    "highlight": null,
-    "fields": null,
-    "facets": null,
-    "explain": false,
-    "sort": [
-      "name"
-    ],
-    "includeLocations": false,
-    "search_after": null,
-    "search_before": null
-  },
-  "hits": [
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1254",
-      "score": 0.11785430845172559,
-      "sort": [
-        "Calais Dunkerque"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1257",
-      "score": 0.06113505132837742,
-      "sort": [
-        "Couterne"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1255",
-      "score": 0.33193447914716134,
-      "sort": [
-        "Peronne St Quentin"
-      ]
-    }
-  ],
-  "total_hits": 3,
-  "max_score": 0.33193447914716134,
-  "took": 26684141,
-  "facets": null
-}
-----
-
-== Example MultiLineString Query (against Circles)
-
-include::partial$fts-geoshape-prereq-common.adoc[]
-
-Intersects when the query multilinestring intersects the circular region in the document.
-
-The results are specified to be sorted on `name`. Note that the hotel and landmark types have a `name` field, while the airport type has an `airportname` field; all these values are analyzed as keywords (exposed as `name`).
-
-[source, command]
-----
-curl -s -XPOST -H "Content-Type: application/json" \
--u ${CB_USERNAME}:${CB_PASSWORD} http://${CB_HOSTNAME}:8094/api/index/test_geojson/query \
--d '{
-  "query": {
-    "field": "geoarea",
-    "geometry": {
-      "shape": {
-        "type": "MultiLineString",
-        "coordinates": [
-          [ [1.954764, 50.962097], [3.029578, 49.868547] ],
-          [ [3.029578, 49.868547], [-0.387444, 48.545836] ]
-        ]
-      },
-      "relation": "intersects"
-    }
-  },
-  "size": 5,
-  "from": 0,
-  "sort": ["name"]
-}' |  jq .
-----
-
-The output of three (3) hits (from a total of 3 matching docs) is as follows:
-
-[source, json]
-----
-{
-  "status": {
-    "total": 1,
-    "failed": 0,
-    "successful": 1
-  },
-  "request": {
-    "query": {
-      "geometry": {
-        "shape": {
-          "type": "MultiLineString",
-          "coordinates": [
-            [
-              [
-                1.954764,
-                50.962097
-              ],
-              [
-                3.029578,
-                49.868547
-              ]
-            ],
-            [
-              [
-                3.029578,
-                49.868547
-              ],
-              [
-                -0.387444,
-                48.545836
-              ]
-            ]
-          ]
-        },
-        "relation": "intersects"
-      },
-      "field": "geoarea"
-    },
-    "size": 5,
-    "from": 0,
-    "highlight": null,
-    "fields": null,
-    "facets": null,
-    "explain": false,
-    "sort": [
-      "name"
-    ],
-    "includeLocations": false,
-    "search_after": null,
-    "search_before": null
-  },
-  "hits": [
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1258",
-      "score": 0.592776664360894,
-      "sort": [
-        "Bray"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1254",
-      "score": 0.08583427853207237,
-      "sort": [
-        "Calais Dunkerque"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1255",
-      "score": 1.7695025992268105,
-      "sort": [
-        "Peronne St Quentin"
-      ]
-    }
-  ],
-  "total_hits": 3,
-  "max_score": 1.7695025992268105,
-  "took": 3894224,
-  "facets": null
-}
-----
diff --git a/modules/fts/pages/fts-queryshape-multipoint.adoc b/modules/fts/pages/fts-queryshape-multipoint.adoc
deleted file mode 100644
index 629e2a39c7..0000000000
--- a/modules/fts/pages/fts-queryshape-multipoint.adoc
+++ /dev/null
@@ -1,396 +0,0 @@
-= MultiPoint Query
-
-[abstract]
-A GeoJSON MultiPoint Query against any GeoJSON type.
-
-== QueryShape for a MultiPoint Query
-
-A GeoJSON query via a GeoShape of MultiPoint finds GeoJSON types in a Search index using the three relations: `intersects`, `contains`, and `within`.
-
-=== MultiPoint `Intersects` Query
-
-An `intersects` query for multipoint returns all the matched documents with shapes that overlap any of the points in the multipoint array within the query.
-
-A multipoint `intersects` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "MultiPoint",
-        "coordinates": [
-          [1.954764, 50.962097],
-          [3.029578, 49.868547]
-        ]
-      },
-      "relation": "intersects"
-    }
-  }
-}
-----
-
-Intersection rules for the MultiPoint Query with other indexed GeoJSON shapes in the document set are given below.
-
-[#geospatial-distance-units,cols="1,2"]
-|===
-| Intersects (relation) +
-Document Shape|{nbsp} +
-MultiPoint (GeoShape)
-
-| Point
-| Intersects when any of the query points overlaps the point in the document (as a point is a non-closed shape).
-
-| LineString
-| Intersects when any of the query points overlaps with any of the line endpoints in the document (as a linestring is a non-closed shape).
-
-| Polygon
-| Intersects when any of the query points lies within the area of the polygon.
-
-| MultiPoint
-| Intersects when any of the query points overlaps with any of the many points in the multipoint array in the document. 
-
-| MultiLineString
-| Intersects when any of the query points overlaps with any of the linestring endpoints in the multilinestring array in the document. 
-
-| MultiPolygon
-| Intersects when any of the query points lies within the area of any of the polygons in the multipolygon array in the document.
-
-| GeometryCollection
-| Intersects when any of the query points overlaps with any of the heterogeneous (above 6) shapes in the geometrycollection array in the document.
-
-| Circle
-| Intersects when any of the query points lies within the area of the circular region in the document.
-
-| Envelope
-| Intersects when any of the query points lies within the area of the rectangular/bounded box region in the document.
-
-|=== 
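GeoJSON positions are `[longitude, latitude]` pairs, so a quick client-side sanity check before submitting a multipoint query can catch swapped coordinates. This is a sketch, not part of the Search API:

```python
def valid_positions(coords):
    # Each position must be [lon, lat] with lon in [-180, 180] and lat in [-90, 90].
    return all(
        len(p) == 2 and -180 <= p[0] <= 180 and -90 <= p[1] <= 90
        for p in coords
    )

print(valid_positions([[1.954764, 50.962097], [3.029578, 49.868547]]))  # True
print(valid_positions([[50.962097, 195.4764]]))                         # False (lat/lon swapped)
```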
-
-=== MultiPoint `Contains` Query
-
-A `contains` query for multipoint returns all the matched documents with shapes that contain the multipoint within the query. 
-
-A multipoint `contains` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "MultiPoint",      
-        "coordinates": [
-          [1.954764, 50.962097],
-          [3.029578, 49.868547]
-        ]
-      },
-      "relation": "contains"
-    }
-  }
-}
-----
-
-Containment rules for the MultiPoint Query with other indexed GeoJSON shapes in the document set are given below.
-
-[#geospatial-distance-units,cols="1,2"]
-|===
-| Contains (relation) +
-Document Shape|{nbsp} +
-MultiPoint (GeoShape)
-
-| Point
-| NA. A point cannot contain a multipoint.
-
-| LineString
-| NA. A linestring cannot contain a multipoint.
-
-| Polygon
-| Contains when all of the query points in the multipoint array lie within the area of the polygon.
-
-| MultiPoint
-| Contains when all of the query points in the multipoint array overlap with any of the many points in the multipoint array in the document.
-
-| MultiLineString
-| NA. A multilinestring cannot contain a multipoint.
-
-| MultiPolygon
-| Contains when all of the query points in the multipoint array lie within the area of any of the polygons in the multipolygon array in the document.
-
-| GeometryCollection
-| Contains when all of the query points in the multipoint array overlap with any of the heterogeneous (above 6) shapes in the geometrycollection array in the document.
-
-| Circle
-| Contains when all of the query points in the multipoint array lie within the area of the circular region in the document.
-
-| Envelope
-| Contains when all of the query points in the multipoint array lie within the area of the rectangular/bounded box region in the document.
-
-|===
-
-=== MultiPoint `Within` Query
-
-A `within` query for multipoint returns all the matched documents with shapes that lie within the multipoint in the query.
-
-A multipoint `within` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "MultiPoint",      
-        "coordinates": [
-          [1.954764, 50.962097],
-          [3.029578, 49.868547]
-        ]
-      },
-      "relation": "within"
-    }
-  }
-}
-----
-
-Within rules for the MultiPoint Query with other indexed GeoJSON shapes in the document set are given below.
-
-[#geospatial-distance-units,cols="1,2"]
-|===
-| Within (relation) +
-Document Shape|{nbsp} +
-MultiPoint (GeoShape)
-
-| Point
-| Matches when any of the query points in the multipoint array overlap with the geo points in the document.
-
-| LineString
-| NA.  
-
-| Polygon
-| NA
-
-| MultiPoint
-| Matches when all of the query points in the multipoint array overlap with any of the many points in the multipoint array in the document. 
-
-| MultiLineString
-| NA
-
-| MultiPolygon
-| NA
-
-| GeometryCollection
-| NA
-
-| Circle
-| NA
-
-| Envelope
-| NA
-
-|===
-
-== Example MultiPoint Query (against Points)
-
-include::partial$fts-geoshape-prereq-common.adoc[]
-
-Matches when any of the query points in the multipoint array overlap with the geo points in the document.
-
-The results are specified to be sorted on `name`. Note that the hotel and landmark types have a `name` field, while the airport type has an `airportname` field; all these values are analyzed as keywords (exposed as `name`).
-
-[source, command]
-----
-curl -s -XPOST -H "Content-Type: application/json" \
--u ${CB_USERNAME}:${CB_PASSWORD} http://${CB_HOSTNAME}:8094/api/index/test_geojson/query \
--d '{
-  "query": {
-    "field": "geojson",
-    "geometry": {
-      "shape": {
-        "type": "MultiPoint",      
-        "coordinates": [
-          [1.954764, 50.962097],
-          [3.029578, 49.868547]
-        ]
-      },
-      "relation": "within"
-    }
-  },
-  "size": 5,
-  "from": 0,
-  "sort": ["name"]
-}' |  jq .
-----
-
-The output of two (2) hits (from a total of 2 matching docs) is as follows:
-
-[source, json]
-----
-{
-  "status": {
-    "total": 1,
-    "failed": 0,
-    "successful": 1
-  },
-  "request": {
-    "query": {
-      "geometry": {
-        "shape": {
-          "type": "MultiPoint",
-          "coordinates": [
-            [
-              1.954764,
-              50.962097
-            ],
-            [
-              3.029578,
-              49.868547
-            ]
-          ]
-        },
-        "relation": "within"
-      },
-      "field": "geojson"
-    },
-    "size": 5,
-    "from": 0,
-    "highlight": null,
-    "fields": null,
-    "facets": null,
-    "explain": false,
-    "sort": [
-      "name"
-    ],
-    "includeLocations": false,
-    "search_after": null,
-    "search_before": null
-  },
-  "hits": [
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1254",
-      "score": 3.5287254429876733,
-      "sort": [
-        "Calais Dunkerque"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1255",
-      "score": 3.5326647348568896,
-      "sort": [
-        "Peronne St Quentin"
-      ]
-    }
-  ],
-  "total_hits": 2,
-  "max_score": 3.5326647348568896,
-  "took": 10149092,
-  "facets": null
-}
-----
-
-== Example MultiPoint Query (against Circles)
-
-include::partial$fts-geoshape-prereq-common.adoc[]
-
-Intersects when any of the query points lies within the area of the circular region in the document.
-
-The results are specified to be sorted on `name`. Note type hotel and landmark have a name field and type airport has an airportname field all these values are analyzed as a keyword (exposed as `name`).
-
-[source, command]
-----
-curl -s -XPOST -H "Content-Type: application/json" \
--u ${CB_USERNAME}:${CB_PASSWORD} http://${CB_HOSTNAME}:8094/api/index/test_geojson/query \
--d '{
-  "query": {
-    "field": "geoarea",
-    "geometry": {
-      "shape": {
-        "type": "MultiPoint",      
-        "coordinates": [
-          [1.954764, 50.962097],
-          [3.029578, 49.868547]
-        ]
-      },
-      "relation": "intersects"
-    }
-  },
-  "size": 5,
-  "from": 0,
-  "sort": ["name"]
-}' |  jq .
-----
-
-The output of two (2) hits (from a total of 2 matching docs) is as follows:
-
-[source, json]
-----
-{
-  "status": {
-    "total": 1,
-    "failed": 0,
-    "successful": 1
-  },
-  "request": {
-    "query": {
-      "geometry": {
-        "shape": {
-          "type": "MultiPoint",
-          "coordinates": [
-            [
-              1.954764,
-              50.962097
-            ],
-            [
-              3.029578,
-              49.868547
-            ]
-          ]
-        },
-        "relation": "intersects"
-      },
-      "field": "geoarea"
-    },
-    "size": 5,
-    "from": 0,
-    "highlight": null,
-    "fields": null,
-    "facets": null,
-    "explain": false,
-    "sort": [
-      "name"
-    ],
-    "includeLocations": false,
-    "search_after": null,
-    "search_before": null
-  },
-  "hits": [
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1254",
-      "score": 0.490187283157727,
-      "sort": [
-        "Calais Dunkerque"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1255",
-      "score": 0.7869533596268505,
-      "sort": [
-        "Peronne St Quentin"
-      ]
-    }
-  ],
-  "total_hits": 2,
-  "max_score": 0.7869533596268505,
-  "took": 7023893,
-  "facets": null
-}
-----
diff --git a/modules/fts/pages/fts-queryshape-multipolygon.adoc b/modules/fts/pages/fts-queryshape-multipolygon.adoc
deleted file mode 100644
index 2f47e97cd2..0000000000
--- a/modules/fts/pages/fts-queryshape-multipolygon.adoc
+++ /dev/null
@@ -1,514 +0,0 @@
-= MultiPolygon Query
-
-[abstract]
-A GeoJSON MultiPolygon Query against any GeoJSON type.
-
-== QueryShape for a MultiPolygon Query
-
-A GeoJSON query via a GeoShape of MultiPolygon finds GeoJSON types in a Search index using the three relations: `intersects`, `contains`, and `within`.
-
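-For reference, a MultiPolygon's `coordinates` value adds one nesting level over a Polygon: it is an array of polygons, each of which is an array of closed linear rings (per the GeoJSON specification). A minimal sketch with illustrative coordinates:
-
-[source, json]
-----
-{
-  "type": "MultiPolygon",
-  "coordinates": [
-    [
-      [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]]
-    ],
-    [
-      [[2.0, 2.0], [3.0, 2.0], [3.0, 3.0], [2.0, 2.0]]
-    ]
-  ]
-}
-----
-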
-=== MultiPolygon `Intersects` Query
-
-An `intersects` query for a multipolygon returns all the matched documents with shapes that overlap the area of any of the polygons in the query.
-
-A multipolygon `intersection` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "MultiPolygon",      
-        "coordinates": [
-          [
-            [[-114.027099609375, 42.00848901572399],
-            [-114.04907226562499, 36.99377838872517],
-            [-109.05029296875, 36.99377838872517],
-            [-109.05029296875, 40.98819156349393],
-            [-111.060791015625, 40.98819156349393],
-            [-111.02783203125, 42.00848901572399],
-            [-114.027099609375, 42.00848901572399]]
-          ],
-          [
-            [[-109.05029296875,37.00255267215955],
-            [-102.041015625,37.00255267215955],
-            [-102.041015625,40.9964840143779],
-            [-109.05029296875,40.9964840143779],
-            [-109.05029296875,37.00255267215955]]
-          ]
-        ]
-      },
-      "relation": "intersects"
-    }
-  }
-}
-----
-
-The intersection rules are similar to those of the Polygon query shape described earlier.
-The rules are applied to every polygon in the query's MultiPolygon array.
-If any of the query polygons intersects an indexed shape in the document, it is considered a matching document.
-
-=== MultiPolygon `Contains` Query
-
-A `contains` query for a multipolygon returns all the matched documents whose shapes collectively contain the area of every polygon in the query.
-
-A multipolygon `contains` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "MultiPolygon",
-        "coordinates": [
-          [
-            [[-114.027099609375, 42.00848901572399],
-            [-114.04907226562499, 36.99377838872517],
-            [-109.05029296875, 36.99377838872517],
-            [-109.05029296875, 40.98819156349393],
-            [-111.060791015625, 40.98819156349393],
-            [-111.02783203125, 42.00848901572399],
-            [-114.027099609375, 42.00848901572399]]
-          ],
-          [
-            [[-109.05029296875,37.00255267215955],
-            [-102.041015625,37.00255267215955],
-            [-102.041015625,40.9964840143779],
-            [-109.05029296875,40.9964840143779],
-            [-109.05029296875,37.00255267215955]]
-          ]
-        ]
-      },
-      "relation": "contains"
-    }
-  }
-}
-----
-
-The containment rules are similar to those of the Polygon query shape described earlier.
-The rules are applied to every polygon in the query's MultiPolygon array.
-If every query polygon is contained within any of the indexed shapes in the document, it is considered a matching document.
-
-=== MultiPolygon `WithIn` Query
-
-The Within query is not supported by line geometries.
-
-A `within` query for a multipolygon returns all the matched documents whose shapes reside within the area of any of the polygons in the query.
-
-A multipolygon `within` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "MultiPolygon",
-        "coordinates": [
-          [
-            [[-114.027099609375, 42.00848901572399],
-            [-114.04907226562499, 36.99377838872517],
-            [-109.05029296875, 36.99377838872517],
-            [-109.05029296875, 40.98819156349393],
-            [-111.060791015625, 40.98819156349393],
-            [-111.02783203125, 42.00848901572399],
-            [-114.027099609375, 42.00848901572399]]
-          ],
-          [
-            [[-109.05029296875,37.00255267215955],
-            [-102.041015625,37.00255267215955],
-            [-102.041015625,40.9964840143779],
-            [-109.05029296875,40.9964840143779],
-            [-109.05029296875,37.00255267215955]]
-          ]
-        ]
-      },
-      "relation": "within"
-    }
-  }
-}
-----
-
-WithIn rules for the MultiPolygon query are similar to those of the Polygon query shape described earlier.
-The rules are applied to every polygon in the query's MultiPolygon array.
-If the polygons in the query collectively contain/cover all of the shapes in the document, it is considered a matching document.
-
-== Example MultiPolygon Query (against Points)
-
-include::partial$fts-geoshape-prereq-common.adoc[]
-
-Matches when the multipolygon in the query contains the point in the document including points on the edge or coinciding with the vertices of the multipolygon.
-
-The MultiPolygon contains two polygons: one for Utah and one for Colorado. The results are specified to be sorted on `name`. Note that the hotel and landmark types have a `name` field, while the airport type has an `airportname` field; all of these values are analyzed as keywords and exposed as `name`.
-
-[source, command]
-----
-curl -s -XPOST -H "Content-Type: application/json" \
--u ${CB_USERNAME}:${CB_PASSWORD} http://${CB_HOSTNAME}:8094/api/index/test_geojson/query \
--d '{
-  "query": {
-    "field": "geojson",
-    "geometry": {
-      "shape": {
-        "type": "MultiPolygon",      
-        "coordinates": [
-          [
-            [[-114.027099609375, 42.00848901572399],
-            [-114.04907226562499, 36.99377838872517],
-            [-109.05029296875, 36.99377838872517],
-            [-109.05029296875, 40.98819156349393],
-            [-111.060791015625, 40.98819156349393],
-            [-111.02783203125, 42.00848901572399],
-            [-114.027099609375, 42.00848901572399]]
-          ],
-          [
-            [[-109.05029296875,37.00255267215955],
-            [-102.041015625,37.00255267215955],
-            [-102.041015625,40.9964840143779],
-            [-109.05029296875,40.9964840143779],
-            [-109.05029296875,37.00255267215955]]
-          ]
-        ]
-      },
-      "relation": "within"
-    }
-  },
-  "size": 5,
-  "from": 0,
-  "sort": ["name"]
-}' |  jq .
-----
-
-The output of five (5) hits (from a total of 45 matching docs) is as follows:
-
-[source, json]
-----
-{
-  "status": {
-    "total": 1,
-    "failed": 0,
-    "successful": 1
-  },
-  "request": {
-    "query": {
-      "geometry": {
-        "shape": {
-          "type": "MultiPolygon",
-          "coordinates": [
-            [
-              [
-                [
-                  -114.027099609375,
-                  42.00848901572399
-                ],
-                [
-                  -114.04907226562499,
-                  36.99377838872517
-                ],
-                [
-                  -109.05029296875,
-                  36.99377838872517
-                ],
-                [
-                  -109.05029296875,
-                  40.98819156349393
-                ],
-                [
-                  -111.060791015625,
-                  40.98819156349393
-                ],
-                [
-                  -111.02783203125,
-                  42.00848901572399
-                ],
-                [
-                  -114.027099609375,
-                  42.00848901572399
-                ]
-              ]
-            ],
-            [
-              [
-                [
-                  -109.05029296875,
-                  37.00255267215955
-                ],
-                [
-                  -102.041015625,
-                  37.00255267215955
-                ],
-                [
-                  -102.041015625,
-                  40.9964840143779
-                ],
-                [
-                  -109.05029296875,
-                  40.9964840143779
-                ],
-                [
-                  -109.05029296875,
-                  37.00255267215955
-                ]
-              ]
-            ]
-          ]
-        },
-        "relation": "within"
-      },
-      "field": "geojson"
-    },
-    "size": 5,
-    "from": 0,
-    "highlight": null,
-    "fields": null,
-    "facets": null,
-    "explain": false,
-    "sort": [
-      "name"
-    ],
-    "includeLocations": false,
-    "search_after": null,
-    "search_before": null
-  },
-  "hits": [
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_7001",
-      "score": 0.15727687392401135,
-      "sort": [
-        "Aspen Pitkin County Sardy Field"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_8854",
-      "score": 0.07715884020494193,
-      "sort": [
-        "Boulder Municipal"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_6999",
-      "score": 0.0741364322553217,
-      "sort": [
-        "Brigham City"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_7857",
-      "score": 0.15503416574594084,
-      "sort": [
-        "Bryce Canyon"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_3567",
-      "score": 0.07715884020494193,
-      "sort": [
-        "Buckley Afb"
-      ]
-    }
-  ],
-  "total_hits": 45,
-  "max_score": 0.28539049531242594,
-  "took": 10460443,
-  "facets": null
-}
-----
-
-== Example MultiPolygon Query (against Circles)
-
-include::partial$fts-geoshape-prereq-common.adoc[]
-
-The MultiPolygon contains two polygons: one for Utah and one for Colorado. The query matches when the multipolygon intersects the circular region in the document.
-
-The results are specified to be sorted on `name`. Note that the hotel and landmark types have a `name` field, while the airport type has an `airportname` field; all of these values are analyzed as keywords and exposed as `name`.
-
-[source, command]
-----
-curl -s -XPOST -H "Content-Type: application/json" \
--u ${CB_USERNAME}:${CB_PASSWORD} http://${CB_HOSTNAME}:8094/api/index/test_geojson/query \
--d '{
-  "query": {
-    "field": "geoarea",
-    "geometry": {
-      "shape": {
-        "type": "MultiPolygon",      
-        "coordinates": [
-          [
-            [[-114.027099609375, 42.00848901572399],
-            [-114.04907226562499, 36.99377838872517],
-            [-109.05029296875, 36.99377838872517],
-            [-109.05029296875, 40.98819156349393],
-            [-111.060791015625, 40.98819156349393],
-            [-111.02783203125, 42.00848901572399],
-            [-114.027099609375, 42.00848901572399]]
-          ],
-          [
-            [[-109.05029296875,37.00255267215955],
-            [-102.041015625,37.00255267215955],
-            [-102.041015625,40.9964840143779],
-            [-109.05029296875,40.9964840143779],
-            [-109.05029296875,37.00255267215955]]
-          ]
-        ]
-      },
-      "relation": "intersects"
-    }
-  },
-  "size": 5,
-  "from": 0,
-  "sort": ["name"]
-}' |  jq .
-----
-
-The output of five (5) hits (from a total of 49 matching docs) is as follows:
-
-[source, json]
-----
-{
-  "status": {
-    "total": 1,
-    "failed": 0,
-    "successful": 1
-  },
-  "request": {
-    "query": {
-      "geometry": {
-        "shape": {
-          "type": "MultiPolygon",
-          "coordinates": [
-            [
-              [
-                [
-                  -114.027099609375,
-                  42.00848901572399
-                ],
-                [
-                  -114.04907226562499,
-                  36.99377838872517
-                ],
-                [
-                  -109.05029296875,
-                  36.99377838872517
-                ],
-                [
-                  -109.05029296875,
-                  40.98819156349393
-                ],
-                [
-                  -111.060791015625,
-                  40.98819156349393
-                ],
-                [
-                  -111.02783203125,
-                  42.00848901572399
-                ],
-                [
-                  -114.027099609375,
-                  42.00848901572399
-                ]
-              ]
-            ],
-            [
-              [
-                [
-                  -109.05029296875,
-                  37.00255267215955
-                ],
-                [
-                  -102.041015625,
-                  37.00255267215955
-                ],
-                [
-                  -102.041015625,
-                  40.9964840143779
-                ],
-                [
-                  -109.05029296875,
-                  40.9964840143779
-                ],
-                [
-                  -109.05029296875,
-                  37.00255267215955
-                ]
-              ]
-            ]
-          ]
-        },
-        "relation": "intersects"
-      },
-      "field": "geoarea"
-    },
-    "size": 5,
-    "from": 0,
-    "highlight": null,
-    "fields": null,
-    "facets": null,
-    "explain": false,
-    "sort": [
-      "name"
-    ],
-    "includeLocations": false,
-    "search_after": null,
-    "search_before": null
-  },
-  "hits": [
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_7001",
-      "score": 0.10519759431791387,
-      "sort": [
-        "Aspen Pitkin County Sardy Field"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_8854",
-      "score": 0.050596784242215975,
-      "sort": [
-        "Boulder Municipal"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_6999",
-      "score": 0.04283511574155623,
-      "sort": [
-        "Brigham City"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_7857",
-      "score": 0.23115574489506296,
-      "sort": [
-        "Bryce Canyon"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_3567",
-      "score": 0.047931898270349875,
-      "sort": [
-        "Buckley Afb"
-      ]
-    }
-  ],
-  "total_hits": 49,
-  "max_score": 0.412412891553119,
-  "took": 11706695,
-  "facets": null
-}
-----
diff --git a/modules/fts/pages/fts-queryshape-point.adoc b/modules/fts/pages/fts-queryshape-point.adoc
deleted file mode 100644
index 0710affa78..0000000000
--- a/modules/fts/pages/fts-queryshape-point.adoc
+++ /dev/null
@@ -1,297 +0,0 @@
-= Point Query
-
-[abstract]
-A GeoJSON Point Query against any GeoJSON type.
-
-== QueryShape for a Point Query
-
-A GeoJSON query via a GeoShape of Point finds GeoJSON types in a Search index using the three relations: `intersects`, `contains`, and `within`.
-
-=== Point `Intersects` Query
-
-A point `intersection` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "point",
-        "coordinates": [1.954764, 50.962097]
-      },
-      "relation": "intersects"
-    }
-  }
-}
-----
-
-Intersection rules for the Point Query with other indexed GeoJSON shapes in the document set are given below.
-
-[#geospatial-distance-units,cols="1,2"]
-|===
-| Intersects (relation) +
-Document Shape|{nbsp} +
-Point (GeoShape)
-
-| Point
-| Matches when the query point overlaps the point in the document (as a point is a non-closed shape).
-
-| LineString
-| Matches when the query point overlaps any of the line endpoints in the document (as a linestring is a non-closed shape).
-
-| Polygon
-| Matches when the query point lies within the area of the polygon.
-
-| MultiPoint
-| Matches when the query point overlaps with any of the many points in the multipoint array in the document.
-
-| MultiLineString
-| Matches when the query point overlaps with any of the linestring endpoints in the multilinestring array in the document.
-
-| MultiPolygon
-| Matches when the query point lies within the area of any of the polygons in the multipolygon array in the document.
-
-| GeometryCollection
-| Matches when the query point overlaps with any of the heterogeneous shapes (the six types above) in the geometrycollection array in the document.
-
-| Circle
-| Matches when the query point lies within the area of the circular region in the document.
-
-| Envelope
-| Matches when the query point lies within the area of the rectangular/bounded box region in the document.
-
-|=== 
-
-=== Point `Contains` Query
-
-As a point is a single, non-closed spot, there is no difference between an `intersects` and a `contains` query for this GeoShape.
-The guiding rules for the `contains` relation are exactly the same as those for `intersects`.
-
-A point `contains` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "point",
-        "coordinates": [1.954764, 50.962097]
-      },
-      "relation": "contains"
-    }
-  }
-}
-----
- 
-=== Point `WithIn` Query
-
-As a point is a non-closed shape, it cannot contain any shape other than an identical point.
-
-A point `within` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": " << fieldName >> ",
-    "geometry": {
-      "shape": {
-        "type": "point",
-        "coordinates": [1.954764, 50.962097]
-      },
-      "relation": "within"
-    }
-  }
-}
-----
- 
-WithIn rules for the Point Query with other indexed GeoJSON shapes in the document set are given below.
-
-[#geospatial-distance-units,cols="1,2"]
-|===
-| WithIn (relation) +
-Document Shape|{nbsp} +
-Point (GeoShape)
-
-| Point
-| Matches when the query point is exactly the same point in the document.
-
-|===
-
-== Example Point Query (against Points)
-
-include::partial$fts-geoshape-prereq-common.adoc[]
-
-Matches when the query point overlaps the point in the document (as a point is a non-closed shape).
-
-The results are specified to be sorted on `name`. Note that the hotel and landmark types have a `name` field, while the airport type has an `airportname` field; all of these values are analyzed as keywords and exposed as `name`.
-
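-An indexed airport document matched by this query holds a GeoJSON point in its `geojson` field, along the following lines (the exact document layout is a sketch; the coordinates are those used in the query):
-
-[source, json]
-----
-{
-  "airportname": "Calais Dunkerque",
-  "geojson": {
-    "type": "Point",
-    "coordinates": [1.954764, 50.962097]
-  }
-}
-----
-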
-[source, command]
-----
-curl -s -XPOST -H "Content-Type: application/json" \
--u ${CB_USERNAME}:${CB_PASSWORD} http://${CB_HOSTNAME}:8094/api/index/test_geojson/query \
--d '{
-  "fields": ["name"],
-  "query": {
-    "field": "geojson",
-    "geometry": {
-      "shape": {
-        "type": "point",
-        "coordinates": [1.954764, 50.962097]
-      },
-      "relation": "contains"
-    }
-  },
-  "sort": ["name"]
-}' |  jq .
-----
-
-The output of one (1) hit (from a total of 1 matching doc) is as follows:
-
-[source, json]
-----
-{
-  "status": {
-    "total": 1,
-    "failed": 0,
-    "successful": 1
-  },
-  "request": {
-    "query": {
-      "geometry": {
-        "shape": {
-          "type": "point",
-          "coordinates": [
-            1.954764,
-            50.962097
-          ]
-        },
-        "relation": "contains"
-      },
-      "field": "geojson"
-    },
-    "size": 10,
-    "from": 0,
-    "highlight": null,
-    "fields": [
-      "name"
-    ],
-    "facets": null,
-    "explain": false,
-    "sort": [
-      "name"
-    ],
-    "includeLocations": false,
-    "search_after": null,
-    "search_before": null
-  },
-  "hits": [
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1254",
-      "score": 9.234442801897503,
-      "sort": [
-        "Calais Dunkerque"
-      ],
-      "fields": {
-        "name": "Calais Dunkerque"
-      }
-    }
-  ],
-  "total_hits": 1,
-  "max_score": 9.234442801897503,
-  "took": 10557459,
-  "facets": null
-}
-----
-
-== Example Point Query (against Circles)
-
-include::partial$fts-geoshape-prereq-common.adoc[]
-
-Matches when the query point lies within the area of the circular region in the document.
-
-The results are specified to be sorted on `name`. Note that the hotel and landmark types have a `name` field, while the airport type has an `airportname` field; all of these values are analyzed as keywords and exposed as `name`.
-
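-A document matched by this query stores a circle in its `geoarea` field, along the following lines (a sketch; the center and `radius` values are illustrative, and Circle is an extension to standard GeoJSON):
-
-[source, json]
-----
-{
-  "geoarea": {
-    "type": "Circle",
-    "coordinates": [1.954764, 50.962097],
-    "radius": "100mi"
-  }
-}
-----
-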
-[source, command]
-----
-curl -s -XPOST -H "Content-Type: application/json" \
--u ${CB_USERNAME}:${CB_PASSWORD} http://${CB_HOSTNAME}:8094/api/index/test_geojson/query \
--d '{
-  "fields": ["name"],
-  "query": {
-    "field": "geoarea",
-    "geometry": {
-      "shape": {
-        "type": "point",
-        "coordinates": [1.954764, 50.962097]
-      },
-      "relation": "intersects"
-    }
-  },
-  "sort": ["name"]
-}' |  jq .
-----
-
-The output of one (1) hit (from a total of 1 matching doc) is as follows:
-
-[source, json]
-----
-{
-  "status": {
-    "total": 1,
-    "failed": 0,
-    "successful": 1
-  },
-  "request": {
-    "query": {
-      "geometry": {
-        "shape": {
-          "type": "point",
-          "coordinates": [
-            1.954764,
-            50.962097
-          ]
-        },
-        "relation": "intersects"
-      },
-      "field": "geoarea"
-    },
-    "size": 10,
-    "from": 0,
-    "highlight": null,
-    "fields": [
-      "name"
-    ],
-    "facets": null,
-    "explain": false,
-    "sort": [
-      "name"
-    ],
-    "includeLocations": false,
-    "search_after": null,
-    "search_before": null
-  },
-  "hits": [
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_1254",
-      "score": 1.2793982619305806,
-      "sort": [
-        "Calais Dunkerque"
-      ],
-      "fields": {
-        "name": "Calais Dunkerque"
-      }
-    }
-  ],
-  "total_hits": 1,
-  "max_score": 1.2793982619305806,
-  "took": 6334489,
-  "facets": null
-}
-----
diff --git a/modules/fts/pages/fts-queryshape-polygon.adoc b/modules/fts/pages/fts-queryshape-polygon.adoc
deleted file mode 100644
index 704d9ac0c2..0000000000
--- a/modules/fts/pages/fts-queryshape-polygon.adoc
+++ /dev/null
@@ -1,523 +0,0 @@
-= Polygon Query
-
-[abstract]
-A GeoJSON Polygon Query against any GeoJSON type.
-
-== QueryShape for a Polygon Query
-
-A GeoJSON query via a GeoShape of Polygon finds GeoJSON types in a Search index using the three relations: `intersects`, `contains`, and `within`.
-
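-A Polygon's `coordinates` value is an array of linear rings; each ring must be closed, with the first and last positions identical (per the GeoJSON specification). A minimal sketch with illustrative coordinates:
-
-[source, json]
-----
-{
-  "type": "Polygon",
-  "coordinates": [
-    [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]]
-  ]
-}
-----
-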
-=== Polygon `Intersects` Query
-
-An `intersects` query for a polygon returns all the matched documents with shapes that overlap the area of the polygon in the query.
-
-A polygon `intersection` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "Polygon",      
-        "coordinates": [
-          [
-            [-114.027099609375, 42.00848901572399],
-            [-114.04907226562499, 36.99377838872517],
-            [-109.05029296875, 36.99377838872517],
-            [-109.05029296875, 40.98819156349393],
-            [-111.060791015625, 40.98819156349393],
-            [-111.02783203125, 42.00848901572399],
-            [-114.027099609375, 42.00848901572399]
-          ]
-        ]
-      },
-      "relation": "intersects"
-    }
-  }
-}
-----
-
-Intersection rules for the Polygon Query with other indexed GeoJSON shapes in the document set are given below.
-
-[#geospatial-distance-units,cols="1,2"]
-|===
-| Intersects (relation) +
-Document Shape|{nbsp} +
-Polygon (GeoShape)
-
-| Point
-| Intersects when the polygon area contains the point in the document.
-
-| LineString
-| Intersects when one of the polygon edges in the query intersects the linestring in the document.
-
-| Polygon
-| Intersects when the polygon area in the query intersects the polygon in the document.
-
-| MultiPoint
-| Intersects when the polygon area contains any of the points in the multipoint array in the document.
-
-| MultiLineString
-| Intersects when the polygon area in the query intersects any of the linestring in the multilinestring array in the document.
-
-| MultiPolygon
-| Intersects when the polygon in the query intersects any of the polygons in the multipolygon array in the document.
-
-| GeometryCollection
-| Matches when the query polygon intersects with any of the heterogeneous shapes (the six types above) in the geometrycollection array in the document.
-
-| Circle
-| Intersects when the query polygon intersects the circular region in the document.
-
-| Envelope
-| Intersects when the query polygon intersects the area of the rectangular/bounded box region in the document.
-
-|=== 
-
-=== Polygon `Contains` Query
-
-A `contains` query for a polygon returns all the matched documents with shapes that contain the polygon in the query.
-
-A polygon `contains` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "Polygon",      
-        "coordinates": [
-          [
-            [-114.027099609375, 42.00848901572399],
-            [-114.04907226562499, 36.99377838872517],
-            [-109.05029296875, 36.99377838872517],
-            [-109.05029296875, 40.98819156349393],
-            [-111.060791015625, 40.98819156349393],
-            [-111.02783203125, 42.00848901572399],
-            [-114.027099609375, 42.00848901572399]
-          ]
-        ]
-      },
-      "relation": "contains"
-    }
-  }
-}
-----
-
-Containment rules for the polygon query with other indexed shapes are given below.
-
-[#geospatial-distance-units,cols="1,2"]
-|===
-| Contains (relation) +
-Document Shape|{nbsp} +
-Polygon (GeoShape)
-
-| Point
-| NA.  Point is a non-closed shape.
-
-| LineString
-| NA.  Linestring is a non-closed shape.
-
-| Polygon
-| Contains when the polygon in the query resides completely within the polygon in the document.
-
-| MultiPoint
-| NA.  MultiPoint is a non-closed shape.
-
-| MultiLineString
-| NA.  MultiLineString is a non-closed shape.
-
-| MultiPolygon
-| Contains when the polygon in the query resides completely within any of the polygons in the multipolygon array in the document.
-
-| GeometryCollection
-| Matches when the query polygon is contained within any of the heterogeneous shapes (the six types above) in the geometrycollection array in the document.
-
-| Circle
-| Contains when the query polygon resides completely within the circular region in the document.
-
-| Envelope
-| Contains when the query polygon resides completely within the rectangular/bounded box region in the document.
-
-|===
-
-=== Polygon `WithIn` Query
-
-The Within query is not supported by line geometries.
-
-A `within` query for a polygon returns all the matched documents with shapes that reside completely within the area of the polygon in the query.
-
-A polygon `within` query sample is given below.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "Polygon",      
-        "coordinates": [
-          [
-            [-114.027099609375, 42.00848901572399],
-            [-114.04907226562499, 36.99377838872517],
-            [-109.05029296875, 36.99377838872517],
-            [-109.05029296875, 40.98819156349393],
-            [-111.060791015625, 40.98819156349393],
-            [-111.02783203125, 42.00848901572399],
-            [-114.027099609375, 42.00848901572399]
-          ]
-        ]
-      },
-      "relation": "within"
-    }
-  }
-}
-----
-
-WithIn rules for the polygon query with other indexed shapes are given below.
-
-[#geospatial-distance-units,cols="1,2"]
-|===
-| WithIn (relation) +
-Document Shape|{nbsp} +
-Polygon (GeoShape)
-
-| Point
-| Matches when the polygon in the query contains the point in the document including points on the edge or coinciding with the vertices of the polygon.
-
-| LineString
-| Matches when the polygon in the query contains both the endpoints of the linestring in the document.
-
-| Polygon
-| Matches when the polygon in the query contains the polygon in the document completely.
-
-| MultiPoint
-| Matches when the polygon in the query contains every point in the multipoint array in the document.
-
-| MultiLineString
-| Matches when the polygon in the query contains every linestring in the multilinestring array in the document.
-
-| MultiPolygon
-| Matches when the polygon in the query contains every polygon in the multipolygon array in the document completely.
-
-| GeometryCollection
-| Matches when the query polygon contains every heterogeneous shape (the six types above) in the geometrycollection array in the document.
-
-| Circle
-| Matches when the polygon in the query contains the circle in the document completely.
-
-| Envelope
-| Matches when the polygon in the query contains the rectangle/envelope in the document completely.
-
-|===
-
-== Example Polygon Query (against Points)
-
-include::partial$fts-geoshape-prereq-common.adoc[]
-
-Matches when the polygon in the query contains the point in the document including points on the edge or coinciding with the vertices of the polygon.
-
-The Polygon below outlines Utah. The results are specified to be sorted on `name`. Note that the hotel and landmark types have a `name` field, while the airport type has an `airportname` field; all of these values are analyzed as keywords and exposed as `name`.
-
-[source, command]
-----
-curl -s -XPOST -H "Content-Type: application/json" \
--u ${CB_USERNAME}:${CB_PASSWORD} http://${CB_HOSTNAME}:8094/api/index/test_geojson/query \
--d '{
-  "query": {
-    "field": "geojson",
-    "geometry": {
-      "shape": {
-        "type": "Polygon",      
-        "coordinates": [
-          [
-            [-114.027099609375, 42.00848901572399],
-            [-114.04907226562499, 36.99377838872517],
-            [-109.05029296875, 36.99377838872517],
-            [-109.05029296875, 40.98819156349393],
-            [-111.060791015625, 40.98819156349393],
-            [-111.02783203125, 42.00848901572399],
-            [-114.027099609375, 42.00848901572399]
-          ]
-        ]
-      },
-      "relation": "within"
-    }
-  },
-  "size": 5,
-  "from": 0,
-  "sort": ["name"]
-}' |  jq .
-----
-
-The output of five (5) hits (from a total of 18 matching docs) is as follows:
-
-[source, json]
-----
-{
-  "status": {
-    "total": 1,
-    "failed": 0,
-    "successful": 1
-  },
-  "request": {
-    "query": {
-      "geometry": {
-        "shape": {
-          "type": "Polygon",
-          "coordinates": [
-            [
-              [
-                -114.027099609375,
-                42.00848901572399
-              ],
-              [
-                -114.04907226562499,
-                36.99377838872517
-              ],
-              [
-                -109.05029296875,
-                36.99377838872517
-              ],
-              [
-                -109.05029296875,
-                40.98819156349393
-              ],
-              [
-                -111.060791015625,
-                40.98819156349393
-              ],
-              [
-                -111.02783203125,
-                42.00848901572399
-              ],
-              [
-                -114.027099609375,
-                42.00848901572399
-              ]
-            ]
-          ]
-        },
-        "relation": "within"
-      },
-      "field": "geojson"
-    },
-    "size": 5,
-    "from": 0,
-    "highlight": null,
-    "fields": null,
-    "facets": null,
-    "explain": false,
-    "sort": [
-      "name"
-    ],
-    "includeLocations": false,
-    "search_after": null,
-    "search_before": null
-  },
-  "hits": [
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_6999",
-      "score": 0.13231342774148913,
-      "sort": [
-        "Brigham City"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_7857",
-      "score": 0.27669394470240527,
-      "sort": [
-        "Bryce Canyon"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_7074",
-      "score": 0.13231342774148913,
-      "sort": [
-        "Canyonlands Field"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_7583",
-      "score": 0.13231342774148913,
-      "sort": [
-        "Carbon County Regional-Buck Davis Field"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_3824",
-      "score": 0.24860341896785076,
-      "sort": [
-        "Cedar City Rgnl"
-      ]
-    }
-  ],
-  "total_hits": 18,
-  "max_score": 0.27669394470240527,
-  "took": 16364364,
-  "facets": null
-}
-----
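The `within` containment test for points can be sketched with a simple planar ray-casting check (an illustrative sketch only; FTS itself answers this via a spatial index, and real geodesic handling differs near the poles and antimeridian). The coordinates are the Utah polygon from the query above:

```python
# Illustrative planar point-in-polygon test (ray casting).
# A point is inside when a horizontal ray from it crosses the
# polygon boundary an odd number of times.

def point_in_polygon(lon, lat, ring):
    """Return True if (lon, lat) falls inside the polygon ring."""
    inside = False
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        # Does a horizontal ray cast eastward from the point cross this edge?
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# The Utah polygon from the query above (the GeoJSON closing vertex
# is omitted; the loop wraps back to the first vertex).
utah = [
    [-114.027099609375, 42.00848901572399],
    [-114.04907226562499, 36.99377838872517],
    [-109.05029296875, 36.99377838872517],
    [-109.05029296875, 40.98819156349393],
    [-111.060791015625, 40.98819156349393],
    [-111.02783203125, 42.00848901572399],
]

print(point_in_polygon(-111.97, 40.76, utah))  # Salt Lake City: True
print(point_in_polygon(-115.14, 36.17, utah))  # Las Vegas: False
```

A point such as Salt Lake City falls inside the ring, while Las Vegas (south-west of the polygon) does not.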
-
-== Example Polygon Query (against Circles)
-
-include::partial$fts-geoshape-prereq-common.adoc[]
-
-Matches when the query polygon intersects the circular region in the document.
-
-The Polygon below is Utah. The results are specified to be sorted on `name`. Note that the hotel and landmark types have a `name` field, while the airport type has an `airportname` field; all these values are analyzed as keywords (exposed as `name`).
-
-[source, command]
-----
-curl -s -XPOST -H "Content-Type: application/json" \
--u ${CB_USERNAME}:${CB_PASSWORD} http://${CB_HOSTNAME}:8094/api/index/test_geojson/query \
--d '{
-  "query": {
-    "field": "geoarea",
-    "geometry": {
-      "shape": {
-        "type": "Polygon",      
-        "coordinates": [
-          [
-            [-114.027099609375, 42.00848901572399],
-            [-114.04907226562499, 36.99377838872517],
-            [-109.05029296875, 36.99377838872517],
-            [-109.05029296875, 40.98819156349393],
-            [-111.060791015625, 40.98819156349393],
-            [-111.02783203125, 42.00848901572399],
-            [-114.027099609375, 42.00848901572399]
-          ]
-        ]
-      },
-      "relation": "intersects"
-    }
-  },
-  "size": 5,
-  "from": 0,
-  "sort": ["name"]
-}' |  jq .
-----
-
-The first five hits (from a total of 20 matching documents) are as follows:
-
-[source, json]
-----
-{
-  "status": {
-    "total": 1,
-    "failed": 0,
-    "successful": 1
-  },
-  "request": {
-    "query": {
-      "geometry": {
-        "shape": {
-          "type": "Polygon",
-          "coordinates": [
-            [
-              [
-                -114.027099609375,
-                42.00848901572399
-              ],
-              [
-                -114.04907226562499,
-                36.99377838872517
-              ],
-              [
-                -109.05029296875,
-                36.99377838872517
-              ],
-              [
-                -109.05029296875,
-                40.98819156349393
-              ],
-              [
-                -111.060791015625,
-                40.98819156349393
-              ],
-              [
-                -111.02783203125,
-                42.00848901572399
-              ],
-              [
-                -114.027099609375,
-                42.00848901572399
-              ]
-            ]
-          ]
-        },
-        "relation": "intersects"
-      },
-      "field": "geoarea"
-    },
-    "size": 5,
-    "from": 0,
-    "highlight": null,
-    "fields": null,
-    "facets": null,
-    "explain": false,
-    "sort": [
-      "name"
-    ],
-    "includeLocations": false,
-    "search_after": null,
-    "search_before": null
-  },
-  "hits": [
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_6999",
-      "score": 0.07521314153068777,
-      "sort": [
-        "Brigham City"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_7857",
-      "score": 0.2608486787753336,
-      "sort": [
-        "Bryce Canyon"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_7074",
-      "score": 0.08184801789845488,
-      "sort": [
-        "Canyonlands Field"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_7583",
-      "score": 0.08652876583277351,
-      "sort": [
-        "Carbon County Regional-Buck Davis Field"
-      ]
-    },
-    {
-      "index": "test_geojson_3397081757afba65_4c1c5584",
-      "id": "airport_3824",
-      "score": 0.4282420802218974,
-      "sort": [
-        "Cedar City Rgnl"
-      ]
-    }
-  ],
-  "total_hits": 20,
-  "max_score": 0.5252881608935254,
-  "took": 12509460,
-  "facets": null
-}
-----
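The polygon/circle `intersects` relation can be sketched in planar geometry (illustrative only; FTS operates on geodesic coordinates via its spatial index): a circle intersects a polygon when its center lies inside the polygon, or when some polygon edge passes within one radius of the center.

```python
import math

# Illustrative planar polygon/circle intersection test.

def dist_point_to_segment(px, py, x1, y1, x2, y2):
    """Shortest distance from (px, py) to the segment (x1,y1)-(x2,y2)."""
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(px - x1, py - y1)
    # Project the point onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (x1 + t * dx), py - (y1 + t * dy))

def point_in_polygon(px, py, ring):
    inside = False
    for i in range(len(ring)):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % len(ring)]
        if (y1 > py) != (y2 > py):
            if px < x1 + (py - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def circle_intersects_polygon(cx, cy, r, ring):
    # Center inside the polygon, or any edge within one radius.
    if point_in_polygon(cx, cy, ring):
        return True
    return any(
        dist_point_to_segment(cx, cy, *ring[i], *ring[(i + 1) % len(ring)]) <= r
        for i in range(len(ring))
    )

square = [[0, 0], [4, 0], [4, 4], [0, 4]]
print(circle_intersects_polygon(5, 2, 1.5, square))  # overlaps the right edge: True
print(circle_intersects_polygon(8, 8, 1.0, square))  # far away: False
```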
diff --git a/modules/fts/pages/fts-quickstart-guide.adoc b/modules/fts/pages/fts-quickstart-guide.adoc
deleted file mode 100644
index 1661037aee..0000000000
--- a/modules/fts/pages/fts-quickstart-guide.adoc
+++ /dev/null
@@ -1,48 +0,0 @@
-= Search Service Quick Start Guide
-:description: Following appropriate preparations, full text searches can be performed in a number of ways.
-:page-aliases: fts:fts-performing-searches.adoc#preparing-for-full-text-searches
-
-[abstract]
-{description}
-
-[#preparing-for-full-text-searches]
-
-include::partial$fts-user-prerequisites-common.adoc[]
-
-== Quick Start via the Classic Editor
-
-include::partial$fts-creating-indexes-common.adoc[]
-
-For a more detailed explanation of the available Query options, refer to xref:fts-searching-from-the-UI.adoc[Searching from the UI]
-
-NOTE: During index creation, in support of most query-types, you can select (or create) and use an _analyzer_.
-This is optional: if you do not specify an analyzer, a default analyzer is provided.
-Analyzers can be created by means of the Couchbase Web Console, during index creation, as described in xref:fts-creating-indexes.adoc[Creating Search Indexes].
-Their functionality and inner components are described in detail in xref:fts-analyzers.adoc[Understanding Analyzers].
-
-[#performing-full-text-searches]
-== Methods to Access the Search service
-
-Search queries (Full Text, Geospatial, Numeric, and other) can be performed with:
-
-* The Couchbase Web Console.
-This UI can also be used to create indexes and analyzers.
-Refer to xref:fts-searching-from-the-UI.adoc[Searching from the UI] for information.
-* The Couchbase REST API.
-Refer to xref:fts-searching-with-curl-http-requests.adoc#Searching-with-the-REST-API-(cURL/HTTP)[Searching with the REST API] for information.
-Refer also to xref:rest-api:rest-fts.adoc[Search API] for REST reference details.
-* The Couchbase SDK.
-This supports several languages, and allows Search queries to be performed with each.
-Refer to the SDK's xref:java-sdk:concept-docs:full-text-search-overview.adoc[Java Search Overview] page for information.
-Note that the xref:java-sdk:howtos:full-text-searching-with-sdk.adoc[Searching from the Java SDK] page for the _Java_ SDK provides an extensive code-example that demonstrates multiple options for performing searches.
-//(Refer to <> below for more information.)
-* The {sqlpp} Search functions.
-These enable you to perform a full text search as part of a {sqlpp} query.
-Refer to xref:n1ql:n1ql-language-reference/searchfun.adoc[Search Functions] for information.
-
-[#establishing-demonstration-indexes]
-== Accessing the Search service via the Java SDK
-
-The Java SDK code-example provided in xref:java-sdk:howtos:full-text-searching-with-sdk.adoc[Searching from the Java SDK] contains multiple demonstration calls — each featuring a different query-combination — and makes use of three different index-definitions, related to the `travel-sample` bucket: for the code example to run successfully, the three indexes must be appropriately pre-established.
-//The definitions are provided in xref:fts-demonstration-indexes.adoc[Demonstration Indexes].
-For instructions on how to use the Couchbase REST API to establish the definitions, refer to xref:fts-creating-index-with-rest-api.adoc[Index Creation with REST API].
diff --git a/modules/fts/pages/fts-search-request.adoc b/modules/fts/pages/fts-search-request.adoc
deleted file mode 100644
index 983f2e5fc7..0000000000
--- a/modules/fts/pages/fts-search-request.adoc
+++ /dev/null
@@ -1,983 +0,0 @@
-= Search Request
-:page-aliases: fts-queries.adoc , fts-consistency.adoc
-
-[#Query]
-== Query
-
-Search allows multiple query types to be performed on Full Text Indexes. Each of these query types helps enhance the search and retrievability of the indexed data.
-
-These capabilities include:
-
-* Input-text and target-text can be analyzed: this transforms input-text into token-streams, according to different specified criteria, allowing richer and more finely controlled forms of text-matching.
-* The fuzziness of a query can be specified, so that the scope of matches can be constrained to a particular level of exactitude. A high degree of fuzziness means that a large number of partial matches may be returned.
-* Multiple queries can be specified for simultaneous processing, with one given a higher boost than another, ensuring that its results are returned at the top of the set.
-* Regular expressions and wildcards can be used in text-specification for search-input.
-* Compound queries can be designed such that an appropriate conjunction or disjunction of the total result-set can be returned.
-* Geospatial queries can be used for finding the nearest neighbor or points of interest in a bounded region.
-
-All the above options, and others, are explained in detail in xref:fts-supported-queries.adoc[Supported Queries]
-
-[#Consistency]
-== Consistency
-
-A mechanism that ensures the Full Text Search (FTS) index can obtain the most up-to-date version of the documents written to a collection or a bucket.
-
-The consistency mechanism provides xref:#consistency-vectors[Consistency Vectors] as objects in the search query that ensure the FTS index searches the latest data written to the vBucket.
-
-The search service does not respond to the query until the designated vBucket receives the correct sequence number. 
-
-The search query remains blocked while continuously polling the vBucket for the requested data. Once the sequence number of the data is obtained, the query is executed over the data written to the vBucket.
-
-When using this consistency mode, the query service will ensure that the indexes are synchronized with the data service before querying.
-
-=== Workflow to understand Consistency
-
-. Create an FTS index in Couchbase.
-. Write a document to the Couchbase cluster. 
-. Couchbase returns the associated vector to the app, which must issue a query request with the vector.
-. The FTS index starts searching the data written to the vBucket.
-
-In this workflow, it is possible that the document written to the vBucket is not yet indexed. So, when FTS starts searching that document, the most up-to-date document versions are not retrieved, and only the indexed versions are queried.
-
-Therefore, the Couchbase server provides a consistency mechanism to overcome this issue and ensures that the FTS index can search the most up-to-date document written to the vBucket.
-
-=== Consistency Level
-
-The consistency level is a parameter that takes either an empty string, indicating unbounded (`not_bounded`) consistency, or `at_plus`, indicating bounded consistency.
-
-==== at_plus
-
-Executes the query, requiring indexes first to be updated to the timestamp of the last update. 
-
-This implements bounded consistency. The request includes a scan_vector parameter and value, which is used as a lower bound. This can be used to implement read-your-own-writes (RYOW).
-
-If index-maintenance is running behind, the query waits for it to catch up.
-
-==== not_bounded
-
-Executes the query immediately, without requiring any consistency for the query. No timestamp vector is used in the index scan. 
-
-This is the fastest mode, because it avoids the costs of obtaining the vector and waiting for the index to catch up to the vector.
-
-If index-maintenance is running behind, out-of-date results may be returned.
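The difference between the two levels can be sketched as follows (a hypothetical simulation of the admission check, not the actual server implementation): with `at_plus`, the query is only admitted once the index has caught up to every sequence number in the scan vector; with `not_bounded`, it runs immediately.

```python
# Hypothetical sketch of bounded vs. unbounded consistency checks.
# indexed_seqnos maps "vbucket/uuid" keys to the highest sequence number
# the index has processed; scan_vector holds the lower bounds from the request.

def index_caught_up(indexed_seqnos, scan_vector):
    """True when every vBucket in the vector has been indexed at least
    up to the requested sequence number (the at_plus admission check)."""
    return all(
        indexed_seqnos.get(key, 0) >= seqno
        for key, seqno in scan_vector.items()
    )

def can_execute(level, indexed_seqnos, scan_vector=None):
    if level == "not_bounded":
        return True  # run immediately, possibly over stale data
    if level == "at_plus":
        return index_caught_up(indexed_seqnos, scan_vector or {})
    raise ValueError("unknown consistency level: " + level)

indexed = {"607/205096593892159": 1, "640/298739127912798": 4}
vector = {"607/205096593892159": 2, "640/298739127912798": 4}

print(can_execute("not_bounded", indexed))      # True: runs immediately
print(can_execute("at_plus", indexed, vector))  # False: vBucket 607 is behind
indexed["607/205096593892159"] = 2
print(can_execute("at_plus", indexed, vector))  # True: index caught up
```

In the real service, the `at_plus` query blocks and polls until the check passes (or the timeout elapses), rather than returning `False`.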
-
-[#consistency-vectors]
-=== Consistency Vectors
-
-The consistency vectors supporting the consistency mechanism in Couchbase contain the mapping of the vBucket and sequence number of the data stored in the vBucket.
-
-For more information about consistency mechanism, see xref:fts-consistency.adoc[Consistency]
-
-==== Example
-[source, JSON]
-----
-{
-  "ctl": {
-    "timeout": 10000,
-    "consistency": {
-      "vectors": {
-        "index1": {
-          "607/205096593892159": 2,
-          "640/298739127912798": 4
-        }
-      },
-      "level": "at_plus"
-    }
-  },
-  "query": {
-    "match": "jack",
-    "field": "name"
-  }
-}
-----
-
-In the example above, this is the set of consistency vectors.
-
-----
-"index1": {
-  "607/205096593892159": 2,
-  "640/298739127912798": 4
-}
-----
-
-The query is looking within the FTS index "index1" - for:
-
-* vbucket 607 (with UUID 205096593892159) to contain sequence number 2
-* vbucket 640 (with UUID 298739127912798) to contain sequence number 4
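Each key in the vector encodes the vBucket ID and its UUID separated by `/`. A small sketch of unpacking these entries (an illustrative helper, not part of any Couchbase SDK):

```python
# Illustrative helper: unpack "vbucket/uuid" keys from a consistency vector.

def parse_vector(vector):
    """Yield (vbucket_id, vbucket_uuid, seqno) tuples from a vector dict."""
    for key, seqno in vector.items():
        vb, uuid = key.split("/")
        yield int(vb), int(uuid), seqno

vector = {"607/205096593892159": 2, "640/298739127912798": 4}
entries = sorted(parse_vector(vector))
print(entries)
# [(607, 205096593892159, 2), (640, 298739127912798, 4)]
```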
-
-=== Consistency Timeout
-
-It is the amount of time (in milliseconds) the search service will allow for a query to execute at an index partition level. 
-
-If the query execution surpasses this `timeout` value, the query is canceled. If some of the index partitions have responded by that point, you might see partial results; otherwise, no results are returned at all.
-
-[source, JSON]
-----
-{
-  "ctl": {
-    "timeout": 10000,
-    "consistency": {
-      "vectors": {
-        "index1": {
-          "607/205096593892159": 2,
-          "640/298739127912798": 4
-        }
-      },
-      "level": "at_plus"
-    }
-  },
-  "query": {
-    "match": "jack",
-    "field": "name"
-  }
-}
-----
-
-=== Consistency Results
-
-The consistency `results` attribute lets you set the query result option, such as `complete`.
-
-==== Example:
-[source, JSON]
-----
-{
-  "query": {...}, 
-  "ctl": {
-    "consistency": {
-      "results": "complete"
-    }
-  }
-} 
-----
-
-=== The "Complete" option
-
-The `complete` option indicates that if any of the index partitions is unavailable because its node is unreachable, the query returns an error instead of partial results.
-    
-==== Example
-[source, JSON]
-----
-{
-  "query": {...}, 
-  "ctl": {
-    "consistency": {
-      "results": "complete"
-    }
-  }
-}
-----
-
-
-=== Consistency Tips and Recommendations
-
-Consistency vectors provide "read your own writes" functionality where the read operation waits for a specific time until the write operation is finished.
-
-If you know that your queries are complex and your write operations need more time to complete, set the timeout higher than the default of 10 seconds so that search operations remain consistent.
-
-However, if this consistency is not required, you can optimize search operations by keeping the default timeout of 10 seconds.
-
-==== Example
-
-[source, JSON]
-----
-{
-
-  "ctl": {
-    "timeout": 10000,
-    "consistency": {
-      "vectors": {
-        "index1": {
-          "607/205096593892159": 2,
-          "640/298739127912798": 4
-        }
-      },
-      "level": "at_plus"
-    }
-  },
-  "query": {
-    "match": "airport",
-    "field": "type"
-  }
-}
-----
-
-[#Sizes-From-Pages]
-== Size/From/Pages
-
-The number of results obtained for a Full Text Search request can be large. Pagination of these results becomes essential for sorting and displaying a subset of these results.
-
-There are multiple ways to achieve pagination with settings within a search request. Pagination will fetch a deterministic set of results when the results are sorted in a certain fashion.
-
-Pagination provides the following options: 
-
-=== Size/from or offset/limit
-
-These pagination settings can be used to obtain a subset of results and work deterministically when combined with a certain sort order.
-
-Using `size/limit` and `offset/from` would fetch at least `size + from` ordered results from a partition and then return the `size` number of results starting at offset `from`.
-
-Deep pagination can therefore become expensive when using `size + from` on a sharded index, because each shard may have to return large result sets (at least `size + from`) over the network for merging at the coordinating node before the `size` number of results starting at offset `from` is returned.
-
-The default sort order is based on _score_ (relevance) where the results are ordered from the highest to the lowest score.
-
-==== Example
-
-Here's an example query that fetches the 11th through 15th results, ordered by _score_.
-
-[source, json]
-----
-{
-  "query": {
-      "match": "California",
-      "field": "state"
-  },
-  "size": 5,
-  "from": 10
-}
-----
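The `size`/`from` semantics above amount to a slice over the fully sorted result list (an illustrative sketch; the server performs this per partition and merges at the coordinating node):

```python
# Illustrative sketch of size/from pagination over sorted results.
# Each partition must produce at least size + from results; the coordinator
# merges them, then returns `size` results starting at offset `from`.

def paginate(sorted_results, size, from_):
    return sorted_results[from_:from_ + size]

results = [f"hotel_{i}" for i in range(1, 21)]  # 20 results, already sorted

page = paginate(results, size=5, from_=10)
print(page)  # the 11th through 15th results
# ['hotel_11', 'hotel_12', 'hotel_13', 'hotel_14', 'hotel_15']
```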
-
-== Search after/before
-
-For an efficient pagination, you can use the `search_after/search_before` settings.
-
-`search_after` is designed to fetch the `size` number of results after the key specified and `search_before` is designed to fetch the `size` number of results before the key specified.
-
-These settings allow the client to maintain state while paginating: the sort key of the last result (for `search_after`) or the first result (for `search_before`) in the current page.
-
-Both attributes accept an array of strings (sort keys); the length of this array must match the length of the "sort" array within the search request.
-
-NOTE: You cannot use both `search_after` and `search_before` in the same search request.
-
-=== Example
-
-Here are some examples using `search_after/search_before` over sort key "_id" (an internal field that carries the document ID).
-
-[source, json]
-----
-{
-  "query": {
-      "match": "California",
-      "field": "state"
-  },
-  "sort": ["_id"],
-  "search_after": ["hotel_10180"],
-  "size": 3
-}
-----
-
-[source, json]
-----
-{
-  "query": {
-      "match": "California",
-      "field": "state"
-  },
-  "sort": ["_id"],
-  "search_before": ["hotel_17595"],
-  "size": 4
-}
-----
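The behavior of the two examples above can be sketched as follows (illustrative only): given a total sort order, `search_after` returns the first `size` results whose sort key falls after the supplied key, and `search_before` the last `size` results before it.

```python
# Illustrative sketch of keyset (search_after / search_before) pagination
# over results carrying a total sort order on document ID.

def search_after(sorted_ids, key, size):
    return [i for i in sorted_ids if i > key][:size]

def search_before(sorted_ids, key, size):
    return [i for i in sorted_ids if i < key][-size:]

# Sample document IDs, sorted lexicographically (note "hotel_9999"
# sorts after "hotel_20422" under string comparison).
ids = sorted(["hotel_10180", "hotel_17595", "hotel_12345",
              "hotel_20422", "hotel_9999"])

print(search_after(ids, "hotel_10180", 3))
print(search_before(ids, "hotel_17595", 4))
```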
-
-NOTE: A Full Text Search request that doesn't carry any pagination settings returns the first 10 results (`"size": 10, "from": 0`) ordered by _score_ from highest to lowest.
-
-=== Pagination tips and recommendations
-
-The pagination of search results can be done using the `from` and `size` parameters in the search request. But as the search gets into deeper pages, it starts consuming more resources. 
-
-To safeguard against arbitrarily high memory requirements, FTS provides a configurable limit, `bleveMaxResultWindow` (default 10000), on the maximum allowable page offsets. However, raising this limit is not a scalable solution.
-
-To circumvent this problem, FTS introduces the concept of keyset pagination.
-
-Instead of providing `from` as a number of search results to skip, the user provides the sort value of a previously seen search result (usually, the last result shown on the current page). To show the next page of results, FTS returns the top N results that sort after the last result from the previous page.
-
-This solution requires a few preconditions be met:
-
-* The search request must specify a sort order.
-
-NOTE: The sort order must impose a total order on the results.  Without this, any results which share the same sort value might be left out when handling the page navigation boundaries.  
-
-A common solution is to always include the document ID as the final sort criterion.
-
-For example, if you want to sort by `["name", "-age"]`, sort by `["name", "-age", "_id"]` instead.
-
-With `search_after`/`search_before` pagination, the heap memory requirement of deeper page searches is proportional to the requested page size alone, which reduces it significantly compared to the `size + from` approach.
-
-[#Sorting]
-== Sorting
-
-FTS results are returned as objects, and an FTS query includes options to order those results.
-
-=== Sorting Result Data
-
-By default, FTS results are sorted in descending order of relevance. Sorting can, however, be customized to use different fields, depending on the application.
-
-On query-completion, _sorting_ allows specified members of the result-set to be displayed prior to others: this facilitates a review of the most significant data.
-
-Within a JSON query object, the required sort-type is specified by using the `sort` field.
-
-This takes an array of _strings_, _objects_, or _numeric values_ as its value.
-
-=== Sorting with Strings
-
-You can specify the value of the `sort` field as an array of strings.
-These can be of three types:
-
-* _field name_: Specifies the name of a field.
-+
-If multiple fields are included in the array, the sorting of documents begins according to their values for the field whose name is first in the array.
-+
-If any number of these values are identical, their documents are sorted again, this time according to their values for the field whose name is second; then, if any number of these values are identical, their documents are sorted a third time, this time according to their values for the field whose name is third; and so on.
-+
-Any document-field may be specified to hold the value on which sorting is to be based, provided that the field has been indexed in some way, whether dynamically or specifically.
-+
-The default sort-order is _ascending_.
-If a field-name is prefixed with the `-` character, that field's results are sorted in _descending_ order.
-
-* `_id`: Refers to the document identifier.
-Whenever encountered in the array, this causes sorting to occur by document identifier.
-
-* `_score`: Refers to the score assigned the document in the result-set.
-Whenever encountered in the array, causes sorting to occur by score.
-
-==== Example
-
-----
-"sort": ["country", "state", "city","-_score"]
-----
-
-This `sort` statement specifies that results will first be sorted by `country`.
-
-If some documents are then found to have the same value in their `country` fields, they are re-sorted by `state`.
-
-Next, if some of these documents are found to have the same value in their `state` fields, they are re-sorted by `city`.
-
-Finally, if some of these documents are found to have the same value in their `city` fields, they are re-sorted by `score`, in _descending_ order.
-
-The following JSON query demonstrates how and where the `sort` property can be specified:
-
-[source,json]
-----
-{
-  "explain": false,
-  "fields": [
-    "title"
-  ],
-  "highlight": {},
-  "sort": ["country", "-_score","-_id"],
-  "query":{
-    "query": "beautiful pool"
-  }
-}
-----
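The tie-breaking behavior of a string sort specification such as `["country", "-_score", "-_id"]` can be sketched with repeated stable sorts (an illustrative sketch; the field values here are hypothetical sample data):

```python
# Illustrative sketch of multi-field sorting with per-field direction:
# sort by country ascending, then score descending, then id descending.

docs = [
    {"id": "hotel_2", "country": "France", "score": 1.2},
    {"id": "hotel_1", "country": "France", "score": 1.2},
    {"id": "hotel_3", "country": "Chile",  "score": 0.8},
]

# Python sorts are stable, so applying the keys from last to first
# reproduces the left-to-right precedence of the "sort" array.
docs.sort(key=lambda d: d["id"], reverse=True)     # "-_id"
docs.sort(key=lambda d: d["score"], reverse=True)  # "-_score"
docs.sort(key=lambda d: d["country"])              # "country"

print([d["id"] for d in docs])  # ['hotel_3', 'hotel_2', 'hotel_1']
```

Chile sorts before France; the two French hotels tie on score, so the descending document ID breaks the tie.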
-
-The following example shows how the `sort` field accepts _combinations_ of strings and objects as its value.
-
-[source,json]
-----
-{
-   ...
-   "sort": [
-      "country",
-      {
-       "by" : "field",
-       "field" : "reviews.ratings.Overall",
-       "mode" : "max",
-       "missing" : "last",
-        "type": "number"
-      },
-      {
-       "by" : "field",
-       "field" : "reviews.ratings.Location",
-       "mode" : "max",
-       "missing" : "last",
-       "type": "number"
-      },
-      "-_score"
-   ]
-}
-----
-
-=== Sorting with Objects
-
-Fine-grained control over sort-procedure can be achieved by specifying _objects_ as array-values in the `sort` field.
-
-Each object can have the following fields:
-
-* `by`: Sorts results on `id`, `score`, or a specified `field` in the Full Text Index.
-
-* `field`: Specifies the name of a field on which to sort.
-Used only if `field` has been specified as the value for the `by` field; otherwise ignored.
-
-* `missing`: Specifies the sort-procedure for documents with a missing value in a field specified for sorting.
-The value of `missing` can be `first`, in which case results with missing values appear _before_ other results; or `last` (the default), in which case they appear _after_.
-
-* `mode`: Specifies the search-order for index-fields that contain multiple values (in consequence of arrays or multi-token analyzer-output).
-The `default` order is undefined but deterministic, allowing the paging of results from `from (_offset_)`, with reliable ordering.
-To sort using the minimum or maximum value, the value of `mode` should be set to either `min` or `max`.
-
-* `type`: Specifies the type of the search-order field value. 
-For example, `string` for text fields, `date` for DateTime fields, or `number` for numeric/geo fields.
-
-To fetch more accurate sort results, we strongly recommend specifying the `type` of the sort fields in the sort section of the search request.
-
-==== Example
-
-The example below shows how to specify the object-sort.
-
-NOTE: The below sample assumes that the `travel-sample` bucket has been loaded, and a default index has been created on it.
-
-[source, json]
-----
-{
-  "explain": false,
-  "fields": [
-     "*"
-   ],
-   "highlight": {},
-   "query": {
-     "match": "bathrobes",
-     "field": "reviews.content",
-     "analyzer": "standard"
-   },
-   "size" : 10,
-   "sort": [
-      {
-       "by" : "field",
-       "field" : "reviews.ratings.Overall",
-       "mode" : "max",
-       "missing" : "last",
-       "type": "number"
-      }
-   ]
-}
-----
-
-For information on loading sample buckets, see xref:manage:manage-settings/install-sample-buckets.adoc[Sample Buckets]. For instructions on creating a default Full Text Index by means of the Couchbase Web Console, see xref:fts-creating-index-from-UI-classic-editor.adoc[Creating Index from UI].
-
-This query sorts search-results based on `reviews.ratings.Overall` — a field that is normally multi-valued because it contains an array of different users' ratings.
-
-When there are multiple values, the highest `Overall` ratings are used for sorting.
-
-Hotels with no `Overall` rating are placed at the end.
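The `mode: max` and `missing: last` semantics described above can be sketched with a composite sort key (an illustrative sketch over hypothetical sample data, not the server implementation):

```python
# Illustrative sketch of sorting on a multi-valued field with
# "mode": "max" (compare on the highest value) and "missing": "last".

def sort_key(doc, field):
    values = doc.get(field)
    if not values:
        return (1, 0)        # missing values sort after everything else
    return (0, max(values))  # mode "max": compare on the highest value

hotels = [
    {"id": "hotel_a", "ratings": [3, 5]},
    {"id": "hotel_b", "ratings": [4]},
    {"id": "hotel_c"},       # no rating at all
]

hotels.sort(key=lambda d: sort_key(d, "ratings"))
print([d["id"] for d in hotels])  # ['hotel_b', 'hotel_a', 'hotel_c']
```

With the default ascending order, `hotel_b` (max 4) precedes `hotel_a` (max 5), and the hotel with no rating lands at the end.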
-
-
-=== Sorting with Numeric
-
-You can specify a numeric sort by using the `type` field in an object within the `sort` array.
-
-With the `type` field, you can set the type of the search order to numeric, string, or DateTime.
-
-==== Example
-
-The example below shows how to specify the object-sort with type field as `number`.
-
-[source,json]
-----
-{
-  "explain": false,
-  "fields": [
-     "*"
-   ],
-   "highlight": {},
-   "query": {
-     "match": "bathrobes",
-     "field": "reviews.content",
-     "analyzer": "standard"
-   },
-   "size" : 10,
-   "sort": [
-      {
-       "by" : "field",
-       "field" : "reviews.ratings.Overall",
-       "mode" : "max",
-       "missing" : "last",
-       "type": "number"
-      }
-   ]
-}
-----
-
-=== Tips for Sorting with fields
-
-When you sort results on a field that is not indexed, or when a particular document is missing a value for that field, you will see the following series of Unicode non-printable characters appear in the sort field:
-
-`\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd` 
-
-The same characters may render differently when using a graphic tool or command line tools like `jq`.
-
-[source,json]
-----
-      "sort": [
-        "����������",
-        "hotel_9723",
-        "_score"
-      ]
-----
-
-Check your index definition to confirm that you are indexing all the fields you intend to sort by. You can control the sort behavior for missing attributes by using the `missing` field.
-
-[#Scoring]
-== Scoring
-
-Search result scoring occurs at query time. The results of a search request are ordered by *score* (relevance), in descending order, unless explicitly set otherwise.
-
-Couchbase uses a slightly modified version of the standard *tf-idf* algorithm; the deviation normalizes the score.
-
-For more details on tf-idf, refer to xref:#scoring-td-idf[tf-idf].
-
-By selecting the `explain score` option within the search request, you can obtain, alongside each result, an explanation of how its score was calculated.
-
-[#fts_explain_scoring_option_enabled]
-image::fts-td-idf-explain-scoring-enabled.png[,850,align=left]
-
-A search query scores all the qualifying documents for relevance and applies the relevant filters.
-
-In a search request, you can set `score` to `none` to disable scoring. See xref:#scoring-option-none[Score:none]
-
-=== Example
-
-The following sample query response shows the *score* field for each document retrieved for the query request:
-
-[source,json]
-----
-  "hits": [
-    {
-      "index": "DemoIndex_76059e8b3887351c_4c1c5584",
-      "id": "hotel_10064",
-      "score": 10.033205341869529,
-      "sort": [
-        "_score"
-      ],
-      "fields": {
-        "_$c": "hotel"
-      }
-    },
-    {
-      "index": "DemoIndex_76059e8b3887351c_4c1c5584",
-      "id": "hotel_10063",
-      "score": 10.033205341869529,
-      "sort": [
-        "_score"
-      ],
-      "fields": {
-        "_$c": "hotel"
-      }
-    }
-  ],
-  "total_hits": 2,
-  "max_score": 10.033205341869529,
-  "took": 284614211,
-  "facets": null
-}
-----
-
-[#scoring-td-idf]
-=== tf-idf
-
-`tf-idf`, short for *term frequency-inverse document frequency*, is a numerical statistic used to reflect how important a word is to a document in a collection or scope.
-
-`tf-idf` is used as a weighting factor in a search for information retrieval and text mining. The `tf–idf` value increases proportionally to the number of times a word appears in the document, and it is offset by the number of documents in the collection or scope that contains the word.
-
-Search engines often use variations of the `tf-idf` weighting scheme as a tool for scoring and ranking a document's relevance for a given query. The tf-idf scoring for document relevancy is done on a per-partition basis, which means that documents across different partitions may have different scores.
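A minimal sketch of the textbook tf-idf weighting (the classic formula only; Couchbase's bleve-based scoring is a normalized variant that also sums other sub-scores):

```python
import math

# Textbook tf-idf: weight(term, doc) = tf(term, doc) * idf(term).
# tf is the term's relative frequency in the document; idf discounts
# terms that appear in many documents of the collection.

def tf_idf(term, doc_tokens, all_docs):
    tf = doc_tokens.count(term) / len(doc_tokens)
    containing = sum(1 for d in all_docs if term in d)
    idf = math.log(len(all_docs) / (1 + containing))
    return tf * idf

docs = [
    ["good", "hotel", "good", "pool"],
    ["bad", "hotel"],
    ["average", "food"],
]

score = tf_idf("good", docs[0], docs)
print(round(score, 4))  # 0.2027
```

Here "good" occurs in half the tokens of the first document but in only one of the three documents, so it gets a positive weight.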
-
-When bleve scores a document, it sums a set of sub scores to reach the final score. The scores across different searches are not directly comparable as the scores are directly dependent on the search criteria. So, changing the search criteria, like terms, boost factor etc. can vary the score.
-
-The more conjuncts/disjuncts/sub clauses in a query can influence the scoring. Also, the score of a particular search result is not absolute, which means you can only use the score as a comparison to the highest score from the same search result. 
-
-FTS does not provide any predefined range for valid scores.
-
-In Couchbase application, you get an option to explore the score computations during any search in FTS.
-
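To build intuition for the weighting described above, here is a minimal, illustrative tf-idf computation in Python. This is a sketch of the general technique only; bleve's actual scoring adds field-length normalization, boosts, and per-partition statistics, and the function and document names below are invented for this example.

```python
import math

def tf_idf(term, doc_tokens, all_docs):
    """Illustrative tf-idf: term frequency within one document,
    offset by how many documents in the collection contain the term."""
    tf = doc_tokens.count(term) / len(doc_tokens)
    docs_with_term = sum(1 for d in all_docs if term in d)
    # 1 + docs_with_term avoids division by zero for unseen terms
    idf = math.log(len(all_docs) / (1 + docs_with_term))
    return tf * idf

# Toy "reviews" corpus: each document is a list of analyzed tokens.
docs = [
    ["good", "service", "good", "room"],
    ["bad", "service"],
    ["average", "room"],
]

# "good" is frequent in docs[0] but rare elsewhere, so it scores high there;
# "service" appears in most documents, so its idf (and score) collapses.
score = tf_idf("good", docs[0], docs)
```

A term that occurs in nearly every document contributes little to relevance, which is exactly the offsetting behavior described above.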
-[#fts_explain_scoring_option]
-image::fts-td-idf-explain-scoring.png[,850,align=left]
-
-On the Search page, you can search for a term in any index. The search result displays the matching records along with the *Explain Scoring* option, which shows how the score for each search hit was derived using the `tf-idf` algorithm.
-
-[#fts_explain_scoring_option_enabled]
-image::fts-td-idf-explain-scoring-enabled.png[,850,align=left]
-
-[#scoring-option-none]
-=== Score:none
-
-You can disable scoring by setting `score` to `none` in the search request. This is recommended when the application does not need scoring (document relevancy).
-
-NOTE: Using `"score": "none"` is expected to boost query performance in certain situations. 
-
-==== Example
-
-[source,json]
-----
-{
-  "query": {
-      "match": "California",
-      "field": "state"
-  },
-  "score": "none",
-  "size": 100
-}
-----
-
-=== Scoring Tips and Recommendations
-
-For a given term, FTS calculates a relevancy score for each matching document, so documents with a higher relevancy score automatically appear at the top of the results.
-
-Users often run Full-Text Search for exact match queries, with some fuzziness or other search-specific capabilities such as geospatial predicates.
-
-The text relevancy score does not matter when the user is looking for exact or narrowly targeted matches with many predicates, or when the dataset is small.
-
-In such cases, FTS spends resources unnecessarily on calculating relevancy scores. You can optimize query performance by skipping scoring: pass a `"score": "none"` option in the search request.
-
-==== Example
-
-[source,json]
-----
-{
-  "query": {},
-  "score": "none",
-  "size": 10,
-  "from": 0
-}
-----
-
-This improves the search query performance significantly in many cases, especially for composite queries with many child search clauses.
-
-[#Highlighting]
-== Highlighting
-
-The `highlight` object in a search request indicates whether highlighting was requested.
-
-As a prerequisite, the term vectors and store options must be enabled at the field level to support highlighting.
-
-The highlight object contains the following fields:
-
-* *style* - (Optional) Specifies the name of the highlighter, for example, `html` or `ansi`.
-
-* *fields* - Specifies an array of field names to which highlighting is restricted.
-
-=== Example 1
-
-In the following example, when you search the content in the index, the matched content in the `address` field is highlighted in the search response.
-
-[source,console]
-----
-curl -u username:password -XPOST -H "Content-Type: application/json" \
-http://localhost:8094/api/index/travel-sample-index/query \
--d '{
-    "explain": true,
-    "fields": [
-        "*"
-    ],
-    "highlight": {    
-      "style":"html",  
-      "fields": ["address"]
-    }, 
-    "query": {
-        "query": "address:farm"
-    }
-}'
-----
-
-==== Result
-
-[#fts_highlighting_in_address_field]
-image::fts-highlighting-in-address-field.png[,520,align=left]
-
-=== Example 2
-
-In the following example, when you search the content in the index, the matched content in the `description` field is highlighted in the search response.
-
-[source,console]
-----
-curl -u username:password -XPOST -H "Content-Type: application/json" \
-http://localhost:8094/api/index/travel-sample-index/query \
--d '{
-    "explain": true,
-    "fields": [
-        "*"
-    ],
-    "highlight": {    
-      "style":"html",  
-      "fields": ["description"]
-    }, 
-    "query": {
-        "query": "description:complementary breakfast"
-    }
-}'
-----
-
-==== Result
-
-[#fts_highlighting_in_description_field]
-image::fts-highlighting-in-description-field.png[,520,align=left]
-
-[#Fields]
-== Fields
-
-You can store specific document fields within FTS and retrieve those as a part of the search results.
-
-It involves the following two-step process:
-
-. *Indexing*
-+
-
-You need to specify the desired fields of the matching documents to be retrieved as a part of the index definition. To do so, select the *store* option checkbox in the field mapping definition for the desired fields. The FTS index will then store the original field contents intact (without applying any text analysis) as a part of its internal storage.
-+
-
-For example, if you want to retrieve the field `description` in the document, enable the *store* option as shown below.
-+
-
-[#fts-type-mappings-child-field]
-image::fts-type-mappings-child-field-dialog-complete.png[,460,align=left]
-+
-
-. *Searching*
-+
-You need to specify the fields to be retrieved in the `fields` setting within the search request. This setting takes an array of field names, specified as strings, which will be returned as part of the search response. While there is no field name pattern matching available, you can use an asterisk (`"*"`) to specify that all stored fields be returned with the response.
-+
-To retrieve the contents of the aforementioned `description` field, you may use the following search request.
-+
-
-----
-curl -XPOST -H "Content-Type: application/json" -uUsername:password http://host:port/api/index/FTS/query -d '{
-  "fields": ["description"],
-  "query": {"field": "queryFieldName", "match": "query text"}
-}'
-----
-
-[#Facets]
-
-== Search Facets
-
-Facets are aggregate information collected on a particular result set.
-For any search, the user can collect additional facet information along with it. 
-
-All the facet examples below are for the query "[.code]``water``" on the beer-sample dataset.
-FTS supports the following types of facets:
-
-[#term-facet]
-=== Term Facet
-
-A term facet counts how many matching documents have a particular term for a specific field.
-
-NOTE: When building a term facet, use the keyword analyzer. Otherwise, multi-term values get tokenized, and the user gets unexpected results.
-
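The note above can be illustrated with a toy comparison in Python. The two functions below are crude stand-ins, not bleve's actual analyzers: a standard-style analyzer tokenizes multi-word values, while a keyword-style analyzer keeps each value as a single term, which is what a term facet needs.

```python
from collections import Counter

def standard_style_analyze(text):
    # crude stand-in for a standard analyzer: lowercase and split on whitespace
    return text.lower().split()

def keyword_style_analyze(text):
    # keyword analyzer: the entire field value is a single term
    return [text]

values = ["India Pale Ale", "India Pale Ale", "Stout"]

tokenized_terms = Counter(t for v in values for t in standard_style_analyze(v))
keyword_terms = Counter(t for v in values for t in keyword_style_analyze(v))

# With the standard-style analyzer, a term facet would count "india",
# "pale", and "ale" as separate terms; with the keyword analyzer, the
# facet counts "India Pale Ale" as one intact term, as intended.
```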
-==== Example
-
-* Term Facet - computes a facet on the `type` field, which has two values: `beer` and `brewery`.
-+
-----
-curl -X POST -H "Content-Type: application/json" \
-http://localhost:8094/api/index/bix/query -d \
-'{
-    "size": 10,
-    "query": {
-        "boost": 1,
-        "query": "water"
-     },
-    "facets": {
-         "type": {
-             "size": 5,
-             "field": "type"
-         }
-    }
-}'
-----
-+
-The result snippet below only shows the facet section for clarity.
-Run the curl command to see the HTTP response containing the full results.
-+
-[source,json]
-----
-"facets": {
-    "type": {
-        "field": "type",
-        "total": 91,
-        "missing": 0,
-        "other": 0,
-        "terms": [
-            {
-                "term": "beer",
-                "count": 70
-            },
-            {
-                "term": "brewery",
-                "count": 21
-            }
-        ]
-    }
-}
-----
-
-[#numeric-range-facet]
-=== Numeric Range Facet
-
-For a numeric range facet, users define their own buckets (numeric ranges).
-
-The facet then counts how many of the matching documents fall into a particular bucket for a particular field.
-
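As a sketch of the counting behavior, the following Python snippet buckets values into named ranges. It assumes, modeled on bleve's convention, that `min` is inclusive and `max` is exclusive (so a value of exactly 7 falls into the `high` bucket below); verify the boundary semantics against your server version.

```python
def numeric_range_counts(values, ranges):
    """Count how many values fall into each named bucket.
    Assumption: min is inclusive, max is exclusive."""
    counts = {r["name"]: 0 for r in ranges}
    for v in values:
        for r in ranges:
            lo_ok = "min" not in r or v >= r["min"]
            hi_ok = "max" not in r or v < r["max"]
            if lo_ok and hi_ok:
                counts[r["name"]] += 1
    return counts

# Buckets mirror the example request below: high (min 7) and low (max 7).
ranges = [{"name": "high", "min": 7}, {"name": "low", "max": 7}]
counts = numeric_range_counts([5.0, 7.0, 8.5, 6.9], ranges)
# 7.0 and 8.5 land in "high"; 5.0 and 6.9 land in "low"
```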
-==== Example
-
-* Numeric Range Facet - computes a facet on the `abv` field with two buckets: `high` (7 and above) and `low` (below 7).
-+
-----
-curl -X POST -H "Content-Type: application/json" \
-http://localhost:8094/api/index/bix/query -d \
-'{
-    "size": 10,
-    "query": {
-        "boost": 1,
-        "query": "water"
-    },
-    "facets": {
-        "abv": {
-            "size": 5,
-            "field": "abv",
-            "numeric_ranges": [
-                {
-                    "name": "high",
-                    "min": 7
-                },
-                {
-                    "name": "low",
-                    "max": 7
-                }
-             ]
-        }
-    }
-}'
-----
-+
-Results:
-+
-[source,json]
-----
-"facets": {
-    "abv": {
-        "field": "abv",
-        "total": 70,
-        "missing": 21,
-        "other": 0,
-        "numeric_ranges": [
-            {
-                "name": "high",
-                "min": 7,
-                "count": 13
-            },
-            {
-                "name": "low",
-                "max": 7,
-                "count": 57
-            }
-        ]
-    }
-}
-----
-
-[#date-range-facet]
-=== Date Range Facet
-
-The date range facet is the same as the numeric range facet, but it works on dates instead of numbers.
-
-Full text search and Bleve expect dates to be in the format specified by https://www.ietf.org/rfc/rfc3339.txt[RFC-3339^], which is a specific profile of ISO-8601 that is more restrictive.
-
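As a quick sanity check, you can verify that a date value parses as expected before using it in a facet. This sketch uses Python's standard `datetime`; it normalizes a trailing `Z` because `fromisoformat` only accepts the `Z` suffix directly from Python 3.11 onward.

```python
from datetime import datetime

def parses_as_timestamp(value):
    """Return True if the value parses as an ISO-8601/RFC-3339-style
    timestamp; normalize 'Z' to an explicit UTC offset first."""
    try:
        datetime.fromisoformat(value.replace("Z", "+00:00"))
        return True
    except ValueError:
        return False

ok = parses_as_timestamp("2010-08-01T00:00:00Z")   # the format FTS returns
bad = parses_as_timestamp("08/01/2010")            # not an RFC-3339 form
```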
-==== Example
-
-* Date Range Facet - computes a facet on the `updated` field with two buckets: `old` and `new`.
-+
-----
-curl -XPOST -H "Content-Type: application/json" -uAdministrator:password \
-http://localhost:8094/api/index/bix/query -d '{
-    "ctl": {"timeout": 0},
-    "from": 0,
-    "size": 0,
-    "query": {
-        "field": "country",
-        "term": "united"
-    },
-    "facets": {
-        "types": {
-            "size": 10,
-            "field": "updated",
-            "date_ranges": [
-                {
-                    "name": "old",
-                    "end": "2010-08-01"
-                },
-                {
-                    "name": "new",
-                    "start": "2010-08-01"
-                }
-            ]
-        }
-    }
-}'
-----
-+
-Results
-+
-[source,json]
-----
-"facets": {
-    "types": {
-        "field": "updated",
-        "total": 954,
-        "missing": 0,
-        "other": 0,
-        "date_ranges": [
-            {
-                "name": "old",
-                "end": "2010-08-01T00:00:00Z",
-                "count": 934
-            },
-            {
-                "name": "new",
-                "start": "2010-08-01T00:00:00Z",
-                "count": 20
-            }
-        ]
-    }
-}
-----
-
-[#Collections]
-== Collections
-
-The `collections` field lets the user specify an optional list of collection names.
-
-This lets users scope their search request to only the specified collections within a multi-collection index.
-
-This is useful with multi-collection indexes because it can speed up searches, and it lets the user manage Role-Based Access Control more granularly: the search user only needs permissions for the requested collections, not for every other collection indexed within the index.
-
-In the absence of any collection names, the search request is treated as a normal search request and retrieves documents from all the indexed collections within the index.
-
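For illustration, a request scoped to two collections might look like the following sketch; the collection names `hotel` and `landmark` are assumptions for this example:

```json
{
  "query": {
    "field": "reviews.content",
    "match": "good"
  },
  "collections": ["hotel", "landmark"]
}
```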
-[#Includelocations]
-== IncludeLocations
-
-Search can return the array positions of the search term relative to the document's hierarchical structure. If the user sets `includeLocations` to `true`, the Search service returns the `array_positions` of the search term occurrences inside the document. To fetch location details at search time, the user must enable the `term_vector` field option for the relevant field during indexing.
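For illustration, a request that enables this option might look like the following sketch (the field and search term are placeholders):

```json
{
  "query": {
    "field": "reviews.content",
    "match": "bathrobes"
  },
  "includeLocations": true
}
```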
diff --git a/modules/fts/pages/fts-searching-from-N1QL.adoc b/modules/fts/pages/fts-searching-from-N1QL.adoc
deleted file mode 100644
index 24c64a1738..0000000000
--- a/modules/fts/pages/fts-searching-from-N1QL.adoc
+++ /dev/null
@@ -1,685 +0,0 @@
-= Searching from {sqlpp}
-
-Search functions enable you to use full text search queries directly within a {sqlpp} query.
-
-== Prerequisites
-
-To use any of the search functions, the Search service must be available on the cluster.
-It is also recommended, but not required, that you create suitable full text indexes for the searches that you need to perform.
-
-[NOTE]
---
-The examples in this page all assume that demonstration full text indexes have been created.
---
-
-=== Authorization
-
-You do not need credentials for the FTS service to be able to use the search functions in a query.
-The role *Data Admin* must be assigned to those who intend to create indexes; and the role *Data Reader* to those who intend to perform searches.
-For information on creating users and assigning roles, see xref:learn:security/authorization-overview.adoc[Authorization].
-
-=== When to Use Search Functions
-
-The search functions are useful when you need to combine a full text search with the power of a {sqlpp} query; for example, combining joins and natural-language search in the same query.
-
-If you only need to use the capabilities of a full text search without any {sqlpp} features, consider making use of the Search service directly, through the user interface, the REST API, or an SDK.
-
-[[search,SEARCH()]]
-== SEARCH(`identifier`, `query`[, `options`])
-
-=== Description
-
-This function enables you to use a full text search to filter a result set, or as a join predicate.
-It is only allowed in the xref:n1ql-language-reference/where.adoc[WHERE] clause or the xref:n1ql-language-reference/join.adoc[ON] clause.
-
-If a query contains a SEARCH function, the Query engine analyzes the entire query, including the search specification, to select the best index to use with this search, taking any index hints into account.
-The Query engine then passes the search specification over to the Search engine to perform the search.
-
-[TIP]
---
-If no suitable full text index can be selected, or no full text index exists, the Query engine falls back on a Primary index or qualified GSI index to produce document keys, and then fetches the documents.
-The Search service then creates a temporary index in memory to perform the search.
-This process may be slower than using a suitable full text index.
---
-
-=== Arguments
-
-identifier::
-[Required] An expression in the form `__keyspaceAlias__[.__path__]`, consisting of the keyspace or keyspace alias in which to search, followed by the path to a field in which to search, using dot notation.
-+
-[NOTE]
---
-* The identifier must contain the keyspace or keyspace alias if there is more than one input source in the FROM clause.
-If there is only one input source in the FROM clause, and the identifier contains a path, the keyspace or keyspace alias may be omitted.
-However, if the path is omitted, the keyspace or keyspace alias is mandatory.
-
-* When the identifier contains a path, it is used as the default field in the _query_ argument, as long as the _query_ argument is a query string.
-If the path is omitted, the default field is set to `{underscore}all`.
-If the _query_ argument is a query string which specifies a field, this field takes priority, and the path in the identifier is ignored.
-Similarly, if the _query_ argument is a query object, the path is ignored.
-
-* The path must use Search syntax rather than {sqlpp} syntax; in other words, you cannot specify array locations such as `[*]` or `[3]` in the path.
-
-* If the keyspace, keyspace alias, or path contains any characters such as `-`, you must surround that part of the identifier with backticks `{backtick}{backtick}`.
---
-+
-The _identifier_ argument cannot be replaced by a {sqlpp} query parameter.
-
-query::
-[Required] The full text search query.
-This may be one of the following:
-+
-[cols="1a,4a", options="header"]
-|===
-| Type
-| Description
-
-| string
-| A query string.
-For more details, refer to xref:fts:fts-supported-queries-query-string-query.adoc[Query String Query].
-
-| object
-| The query object within a full text search request.
-For more details, refer to xref:fts:fts-supported-queries.adoc[Supported queries].
-
-| object
-| A complete full text search request, including sort and pagination options, and so on.
-For more details, refer to xref:fts:fts-sorting.adoc[Sorting Query Results].
-
-[NOTE]
-====
-When specifying a complete full text search request with the {sqlpp} SEARCH() function, if the value of the `size` parameter is greater than the maximum number of full text search results, the query ignores the `size` parameter and returns all matching results.
-
-This is different to the behavior of a complete full text search request in the Search service, where the query returns an error if the value of the `size` parameter is greater than the maximum number of full text search results.
-====
-|===
-+
-The _query_ argument may be replaced by a {sqlpp} query parameter, as long as the query parameter resolves to a string or an object.
-
-options::
-[Optional] A JSON object containing options for the search.
-The object may contain the following fields:
-+
-[cols="1a,1a,3a", options="header"]
-|===
-| Name
-| Type
-| Description
-
-| `index`
-[Optional]
-| string, object
-| The `index` field may be a string, containing the name of a full text index in the keyspace.
-(This may be a full text index alias, but only if the full text index is in the same keyspace.)
-This provides an index hint to the Query engine.
-If the full text index does not exist, an error occurs.
-
-[TIP]
---
-You can also provide an index hint to the Query engine with the xref:n1ql-language-reference/hints.adoc#use-index-clause[USE INDEX clause].
-This takes precedence over a hint provided by the `index` field.
---
-
-'''
-
-The `index` field may also be an object, containing an example of a full text index mapping.
-This is treated as an input to the index mapping.
-It overrides the default mapping and is used during index selection and filtering.
-
-The object must either have a default mapping with no type mapping, or a single type mapping with the default mapping disabled.
-For more information, refer to xref:fts:fts-creating-indexes.adoc[Creating Indexes].
-
-| `indexUUID`
-[Optional]
-| string
-| A string, containing the UUID of a full text index in the keyspace.
-This provides an index hint to the Query engine.
-If the full text index cannot be identified, an error occurs.
-
-You can use the `indexUUID` field alongside the `index` field to help identify a full text index.
-The `indexUUID` field and the `index` field must both identify the same full text index.
-If they identify different full text indexes, or if either of them does not identify a full text index, an error occurs.
-
-You can find the UUID of a full text index by viewing the index definition.
-You can do this using the xref:fts:fts-creating-index-from-UI-classic-editor.adoc#using-the-index-definition-preview[Index Definition Preview] in the Query Workbench, or the xref:rest-api:rest-fts-indexing.adoc[Index Definition] endpoints provided by the Full Text Search REST API.
-
-| `out`
-[Optional]
-| string
-| A name given to this full text search operation in this keyspace.
-You can use this name to refer to this operation using the <<search_meta,SEARCH_META()>> and <<search_score,SEARCH_SCORE()>> functions.
-If this field is omitted, the name of this full text search operation defaults to `"out"`.
-
-| (other)
-[Optional]
-| (any)
-| Other fields are ignored by the Query engine and are passed on to the Search engine as options.
-The values of these options may be replaced with {sqlpp} query parameters, such as `"analyzer": $analyzer`.
-|===
-
-+
-The _options_ argument cannot be replaced by a {sqlpp} query parameter, but it may contain {sqlpp} query parameters.
-
-=== Return Value
-
-A boolean, representing whether the search query is found within the input path.
-
-This returns `true` if the search query is found within the input path, or `false` otherwise.
-
-=== Limitations
-
-The Query service can select a full text index for efficient search in the following cases:
-
-* If the SEARCH() function is used in a WHERE clause or in an ANSI JOIN.
-The SEARCH() function must be on the leftmost (first) JOIN.
-It may be on the outer side of a nested-loop JOIN, or either side of a hash JOIN.
-RIGHT OUTER JOINs are rewritten as LEFT OUTER JOINs.
-
-* If the SEARCH() function is evaluated on the `true` condition in positive cases: for example, `SEARCH(_field_, _query_, _options_)`, `SEARCH(_field_, _query_, _options_) = true`, `SEARCH(_field_, _query_, _options_) IN [true, true, true]`, or a condition including one of these with `AND` or `OR`.
-
-The Query service cannot select a full text index for efficient search in the following cases:
-
-* If a USE KEYS hint is present; or if the SEARCH() function is used on the inner side of a nested-loop JOIN, a lookup JOIN or lookup NEST, an index JOIN or index NEST, an UNNEST clause, a subquery expression, a subquery result, or a correlated query.
-
-* If the SEARCH() function is evaluated on the `false` condition, or in negative cases: for example, `NOT SEARCH(_field_, _query_, _options_)`, `SEARCH(_field_, _query_, _options_) = false`, `SEARCH(_field_, _query_, _options_) != false`, `SEARCH(_field_, _query_, _options_) IN [false, true, 1, "a"]`, or in a condition using the relation operators `<`, `{lt}=`, `>`, `>=`, `BETWEEN`, `NOT`, `LIKE`, or `NOT LIKE`.
-
-In these cases, the Query service must fetch the documents, and the Search service creates a temporary index in memory to perform the search.
-This may affect performance.
-
-If the SEARCH() function is present for a keyspace, no GSI covering scan is possible on that keyspace.
-If more than one FTS or GSI index are used in the plan, IntersectScan or Ordered IntersectScan is performed.
-To avoid this, use a USE INDEX hint.
-
-Order pushdown is possible only if the query ORDER BY has only <<search_score,SEARCH_SCORE()>> on the leftmost keyspace.
-Offset and Limit pushdown is possible if the query only has a SEARCH() predicate, using a single search index -- no IntersectScan or OrderIntersectScan.
-Group aggregates and projection are not pushed.
-
-=== Examples
-
-.Search using a query string
-====
-The following queries are equivalent:
-
-[source,sqlpp]
-----
-SELECT META(t1).id
-FROM `travel-sample`.inventory.airline AS t1
-WHERE SEARCH(t1.country, "+United +States");
-----
-
-[source,sqlpp]
-----
-SELECT META(t1).id
-FROM `travel-sample`.inventory.airline AS t1
-WHERE SEARCH(t1, "country:\"United States\"");
-----
-
-.Results
-[source,json]
-----
-[
-
-  {
-    "id": "airline_10"
-  },
-  {
-    "id": "airline_10123"
-  },
-  {
-    "id": "airline_10226"
-  },
-  {
-    "id": "airline_10748"
-  },
-...
-]
-----
-
-The results are unordered, so they may be returned in a different order each time.
-====
-
-.Search using a query object
-====
-[source,sqlpp]
-----
-SELECT t1.name
-FROM `travel-sample`.inventory.hotel AS t1
-WHERE SEARCH(t1, {
-  "match": "bathrobes",
-  "field": "reviews.content",
-  "analyzer": "standard"
-});
-----
-
-.Results
-[source,json]
-----
-[
-  {
-    "name": "Typoeth Cottage"
-  },
-  {
-    "name": "Great Orme Lighthouse"
-  },
-  {
-    "name": "New Road Guest House (B&B)"
-  },
-...
-]
-----
-
-The results are unordered, so they may be returned in a different order each time.
-====
-
-.Search using a complete full text search request
-====
-[source,sqlpp]
-----
-SELECT t1.name
-FROM `travel-sample`.inventory.hotel AS t1
-WHERE SEARCH(t1, {
-  "explain": false,
-  "fields": [
-     "*"
-   ],
-   "highlight": {},
-   "query": {
-     "match": "bathrobes",
-     "field": "reviews.content",
-     "analyzer": "standard"
-   },
-   "size" : 5,
-   "sort": [
-      {
-       "by" : "field",
-       "field" : "reviews.ratings.Overall",
-       "mode" : "max",
-       "missing" : "last"
-      }
-   ]
-});
-----
-
-.Results
-[source,json]
-----
-[
-  {
-    "name": "Waunifor"
-  },
-  {
-    "name": "Bistro Prego With Rooms"
-  },
-  {
-    "name": "Thornehill Broome Beach Campground"
-  },
-...
-]
-----
-
-This query returns 5 results, and the results are ordered, as specified by the search options.
-As an alternative, you could limit the number of results and order them using the {sqlpp} xref:n1ql-language-reference/limit.adoc[LIMIT] and xref:n1ql-language-reference/orderby.adoc[ORDER BY] clauses.
-====
-
-.Search against a full text search index that carries a custom type mapping
-====
-[source,sqlpp]
-----
-SELECT META(t1).id
-FROM `travel-sample`.inventory.hotel AS t1
-WHERE t1.type = "hotel" AND SEARCH(t1.description, "amazing");
-----
-
-.Results
-[source,json]
-----
-[
-  {
-    "id": "hotel_20422"
-  },
-  {
-    "id": "hotel_22096"
-  },
-  {
-    "id": "hotel_25243"
-  },
-  {
-    "id": "hotel_27741"
-  }
-]
-----
-
-If the full text search index being queried has its default mapping disabled and has a custom type mapping defined, the query needs to specify the type explicitly.
-
-//The above query uses the demonstration index xref:fts:fts-demonstration-indexes.adoc#travel-sample-index-hotel-description[travel-sample-index-hotel-description], which has the custom type mapping "hotel".
-
-For more information on defining custom type mappings within the full text search index, refer to xref:fts:fts-type-mappings.adoc[Type Mappings].
-Note that for {sqlpp} queries, only full text search indexes with one type mapping are searchable.
-Also the supported type identifiers at the moment are "type_field" and "docid_prefix"; "docid_regexp" isn't supported yet for SEARCH queries via {sqlpp}.
-====
-
-[[search_meta,SEARCH_META()]]
-== SEARCH_META([`identifier`])
-
-=== Description
-
-This function is intended to be used in a query which contains a <<search,SEARCH()>> function.
-It returns the metadata given by the Search engine for each document found by the <<search,SEARCH()>> function.
-If there is no <<search,SEARCH()>> function in the query, or if a full text index was not used to evaluate the search, the function returns MISSING.
-
-=== Arguments
-
-identifier::
-[Optional] An expression in the form `{startsb}__keyspaceAlias__.{endsb}__outname__`, consisting of the keyspace or keyspace alias in which the full text search operation was performed, followed by the outname of the full text search operation, using dot notation.
-
-[NOTE]
---
-* The identifier must contain the keyspace or keyspace alias if there is more than one input source in the FROM clause.
-If there is only one input source in the FROM clause, the keyspace or keyspace alias may be omitted.
-
-* The identifier must contain the outname if there is more than one <<search,SEARCH()>> function in the query.
-If there is only one <<search,SEARCH()>> function in the query, the identifier may be omitted altogether.
-
-* The outname is specified by the `out` field within the <<search,SEARCH()>> function's _options_ argument.
-If an outname was not specified by the <<search,SEARCH()>> function, the outname defaults to `"out"`.
-
-* If the keyspace or keyspace alias contains any characters such as `-`, you must surround that part of the identifier with backticks `{backtick}{backtick}`.
---
-
-=== Return Value
-
-A JSON object containing the metadata returned by the Search engine.
-By default, the metadata includes the score and ID of the search result.
-It may also include other metadata requested by advanced search options, such as the location of the search terms or an explanation of the search results.
-
-=== Examples
-
-.Select search metadata
-====
-[source,sqlpp]
-----
-SELECT SEARCH_META() AS meta -- <1>
-FROM `travel-sample`.inventory.hotel AS t1
-WHERE SEARCH(t1, {
-  "query": {
-    "match": "bathrobes",
-    "field": "reviews.content",
-    "analyzer": "standard"
-  },
-  "includeLocations": true -- <2>
-})
-LIMIT 3;
-----
-
-.Result
-[source,json]
-----
-[
-  {
-    "meta": {
-      "id": "hotel_12068", // <3>
-      "locations": { // <4>
-        "reviews.content": {
-          "bathrobes": [
-            {
-              "array_positions": [
-                8
-              ],
-              "end": 664,
-              "pos": 122,
-              "start": 655
-            }
-          ]
-        }
-      },
-      "score": 0.3471730605306995 // <5>
-    }
-  },
-  {
-    "meta": {
-      "id": "hotel_18819",
-      "locations": {
-        "reviews.content": {
-          "bathrobes": [
-            {
-              "array_positions": [
-                6
-              ],
-              "end": 110,
-              "pos": 19,
-              "start": 101
-            }
-          ]
-        }
-      },
-      "score": 0.3778486940124847
-    }
-  },
-  {
-    "meta": {
-      "id": "hotel_5841",
-      "locations": {
-        "reviews.content": {
-          "bathrobes": [
-            {
-              "array_positions": [
-                0
-              ],
-              "end": 1248,
-              "pos": 242,
-              "start": 1239
-            }
-          ]
-        }
-      },
-      "score": 0.3696905918027607
-    }
-  }
-]
-----
-====
-
-<1> There is only one <<search,SEARCH()>> function in this query, so the SEARCH_META() function does not need to specify the outname.
-<2> The full text search specifies that locations should be included in the search result metadata.
-<3> The id is included in the search result metadata by default.
-<4> The location of the search term is included in the search result metadata as requested.
-<5> The score is included in the search result metadata by default.
-
-.Select the search metadata by outname
-====
-[source,sqlpp]
-----
-SELECT t1.name, SEARCH_META(s1) AS meta -- <1>
-FROM `travel-sample`.inventory.hotel AS t1
-WHERE SEARCH(t1.description, "mountain", {"out": "s1"}) -- <2>
-AND SEARCH(t1, {
-  "query": {
-    "match": "bathrobes",
-    "field": "reviews.content",
-    "analyzer": "standard"
-  }
-});
-----
-
-.Results
-[source,json]
-----
-[
-  {
-    "name": "Marina del Rey Marriott"
-  }
-]
-----
-====
-
-<1> This query contains two <<search,SEARCH()>> functions.
-The outname indicates which metadata we want.
-<2> The outname is set by the _options_ argument in this <<search,SEARCH()>> function.
-This query only uses one data source, so there is no need to specify the keyspace.
-
-[[search_score,SEARCH_SCORE()]]
-== SEARCH_SCORE([`identifier`])
-
-=== Description
-
-This function is intended to be used in a query which contains a <<search,SEARCH()>> function.
-It returns the score given by the Search engine for each document found by the <<search,SEARCH()>> function.
-If there is no <<search,SEARCH()>> function in the query, or if a full text index was not used to evaluate the search, the function returns MISSING.
-
-This function is the same as <<search_meta,SEARCH_META()>>`.score`.
-
-=== Arguments
-
-identifier::
-[Optional] An expression in the form `{startsb}__keyspaceAlias__.{endsb}__outname__`, consisting of the keyspace or keyspace alias in which the full text search operation was performed, followed by the outname of the full text search operation, using dot notation.
-
-[NOTE]
---
-* The identifier must contain the keyspace or keyspace alias if there is more than one input source in the FROM clause.
-If there is only one input source in the FROM clause, the keyspace or keyspace alias may be omitted.
-
-* The identifier must contain the outname if there is more than one <<search,SEARCH()>> function in the query.
-If there is only one <<search,SEARCH()>> function in the query, the identifier may be omitted altogether.
-
-* The outname is specified by the `out` field within the <<search,SEARCH()>> function's _options_ argument.
-If an outname was not specified by the <<search,SEARCH()>> function, the outname defaults to `"out"`.
-
-* If the keyspace or keyspace alias contains any characters such as `-`, you must surround that part of the identifier with backticks `{backtick}{backtick}`.
---
-
-=== Return Value
-A number reflecting the score of the result.
-
-=== Examples
-
-.Select the search score
-====
-
-[source,sqlpp]
-----
-SELECT name, description, SEARCH_SCORE() AS score -- <1>
-FROM `travel-sample`.inventory.hotel AS t1
-WHERE SEARCH(t1.description, "mountain")
-ORDER BY score DESC
-LIMIT 3;
-----
-
-.Results
-[source,json]
-----
-[
-  {
-    "description": "3 Star Hotel next to the Mountain Railway terminus and set in 30 acres of grounds which include Dolbadarn Castle",
-    "name": "The Royal Victoria Hotel"
-  },
-  {
-    "description": "370 guest rooms offering both water and mountain view.",
-    "name": "Marina del Rey Marriott"
-  },
-  {
-    "description": "This small family run hotel captures the spirit of Mull and is a perfect rural holiday retreat. The mountain and sea blend together to give fantastic, panoramic views from the hotel which is in an elevated position on the shoreline. Panoramic views are also available from the bar and restaurant which serves local produce 7 days a week.",
-    "name": "The Glenforsa Hotel"
-  }
-]
-----
-====
-
-<1> There is only one <> function in this query, so the SEARCH_SCORE() function does not need to specify the outname.
-
-== FTS FLEX (FTS + {sqlpp} Extended Support For Collections)
-
-FTS is capable of supporting multiple collections within a single index definition.
-
-Index definitions created before Couchbase Server 7.0 will continue to be supported by FTS in 7.0.
-
-If you want an index definition to subscribe to just a few collections within a single scope, set "doc_config.mode" to either "scope.collection.type_field" or "scope.collection.docid_prefix".
-
-The type mappings now take the form of either "scope_name.collection_name" (to index all documents within that scope and collection) or "scope_name.collection_name.type_name" (to index only those documents within that scope and collection whose "type" field equals "type_name"). We will refer to FTS index definitions in this mode as collection-aware FTS indexes.
-
-NOTE: The type expression check within {sqlpp} queries becomes unnecessary with collection-aware FTS indexes.
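The mapping resolution described above can be sketched schematically. This is an illustrative model only, not FTS internals: the helper names, and the idea of returning candidate keys most-specific-first, are assumptions made for the sketch.

```python
# Hypothetical sketch: how a collection-aware index in
# "scope.collection.type_field" mode could resolve the type-mapping key
# for a document. Names and shapes are illustrative, not FTS internals.
def mapping_keys(scope, collection, doc, type_field="type"):
    """Return candidate type-mapping keys, most specific first."""
    keys = []
    if type_field in doc:
        keys.append(f"{scope}.{collection}.{doc[type_field]}")
    keys.append(f"{scope}.{collection}")
    return keys

def select_mapping(types, scope, collection, doc, type_field="type"):
    """Pick the first enabled mapping that matches the document."""
    for key in mapping_keys(scope, collection, doc, type_field):
        mapping = types.get(key)
        if mapping and mapping.get("enabled"):
            return key
    return None
```

With the example index definition below, a document `{"type": "hotel"}` arriving from `inventory.hotel` would resolve to the `"inventory.hotel"` mapping.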
-
-=== Example
-
-When you set up an FTS index definition to stream from two collections, landmark and hotel, such as:
-
-----
-{
-  "type": "fulltext-index",
-  "name": "travel",
-  "sourceType": "gocbcore",
-  "sourceName": "travel-sample",
-  "params": {
-    "doc_config": {
-      "mode": "scope.collection.type_field",
-      "type_field": "type"
-    },
-    "mapping": {
-      "analysis": {},
-      "default_analyzer": "standard",
-      "default_mapping": {
-        "dynamic": true,
-        "enabled": false
-      },
-      "types": {
-        "inventory.hotel": {
-          "enabled": true,
-          "properties": {
-            "reviews": {
-              "enabled": true,
-              "properties": {
-                "content": {
-                  "enabled": true,
-                  "fields": [
-                    {
-                      "analyzer": "keyword",
-                      "index": true,
-                      "name": "content",
-                      "type": "text"
-                    }
-                  ]
-                }
-              }
-            }
-          }
-        },
-        "inventory.landmark": {
-          "enabled": true,
-          "properties": {
-            "content": {
-              "enabled": true,
-              "fields": [
-                {
-                  "analyzer": "keyword",
-                  "index": true,
-                  "name": "content",
-                  "type": "text"
-                }
-              ]
-            }
-          }
-        }
-      }
-    }
-  }
-}
-----
-
-Below are some {sqlpp} queries targeting the above index definition.
-
-----
-SELECT META().id
-FROM `travel-sample`.`inventory`.`landmark` t USE INDEX(USING FTS)
-WHERE content LIKE "%travel%";
-----
-
-----
-SELECT META().id
-FROM `travel-sample`.`inventory`.`hotel` t USE INDEX(USING FTS)
-WHERE reviews.content LIKE "%travel%";
-----
-
-----
-SELECT META().id
-FROM `travel-sample`.`inventory`.`hotel` t USE INDEX(USING FTS)
-WHERE content LIKE "%travel%";
-----
diff --git a/modules/fts/pages/fts-searching-from-the-UI.adoc b/modules/fts/pages/fts-searching-from-the-UI.adoc
deleted file mode 100644
index 16372237f4..0000000000
--- a/modules/fts/pages/fts-searching-from-the-UI.adoc
+++ /dev/null
@@ -1,91 +0,0 @@
-[#Searching-from-the-UI]
-
-= Searching from the UI
-:page-aliases: searching-from-the-UI.adoc, fts-searching-from-the-ui.adoc
-
-[abstract]
-Full Text Search can be performed from the Couchbase Web Console.
-
-include::partial$fts-user-prerequisites-common.adoc[]
-
-[#fts-quick-start]
-== Access the Full Text Search User Interface
-
-To access the *Full Text Search* screen, left-click on the *Search* tab, in the navigation bar at the left-hand side:
-
-[#fts_select_search_tab]
-image::fts-select-search-tab.png[,100,align=left]
-
-The *Full Text Search* screen now appears, as follows:
-
-[#fts_fts_console_initial]
-image::fts-search-page.png[,,align=left]
-
-The console contains areas for the display of _indexes_ and _aliases_, but both are empty, since none has yet been created.
-
-[#fts_ensure_there_is_an_index]
-== Make sure you have a Search index
-
-If you don't have a Search index already, you can create one against `travel-sample` as follows:
-
-** Creating a *One Field Index* xref:fts-creating-index-from-UI-classic-editor-onefield.adoc[via the UI] or xref:fts-creating-index-from-REST-onefield.adoc[via the REST API].
-
-** Creating a *Dynamic Index* xref:fts-creating-index-from-UI-classic-editor-dynamic.adoc[via the UI] or xref:fts-creating-index-from-REST-dynamic.adoc[via the REST API].
-
-** Creating a *Geopoint Index* xref:fts-creating-index-from-UI-classic-editor-geopoint.adoc[via the UI] or xref:fts-creating-index-from-REST-geopoint.adoc[via the REST API].
-
-The instructions provided will create an index named something like *travel-sample-index*, *test_dynamic*, or *test_geopoint*. Any of these indexes will work (the final one will also have the ability to search against geopoints).
-
-[#Performing-Queries]
-== Perform a Query
-
-To perform a query, simply type a term into the interactive text-field that appears to the left of the *Search* button on the row for the index you have created.
-For example, `restaurant`.
-Then, left-click on the *Search* button:
-
-[#fts_ui_search_for_term]
-image::fts-ui-search-for-term.png[,400,align=left]
-
-A *Search Results* page now appears, featuring documents that contain the specified term:
-
-[#fts_ui_search_results_page]
-image::fts-ui-search-results-page.png[,,align=left]
-
-By left-clicking on any of the displayed document IDs, you bring up a display that features the entire contents of the document.
-
-== Advanced Query Settings and Other Features in the UI
-
-On the *Search Results* page, to the immediate right of the *Search* button, at the top of the screen, appears the *show advanced query settings* checkbox.
-Check this to display the advanced settings:
-
-[#fts_advanced_query_settings]
-image::fts-advanced-query-settings.png[,,align=left]
-
-Three interactive text-fields now appear underneath the *Search* panel: *Timeout (msecs)*, *Consistency Level*, and *Consistency Vector*.
-Additionally, the *JSON for Query Request* panel displays the submitted query in JSON format.
-Note the *show command-line curl example* checkbox, which when checked, adds to the initial JSON display, to form a completed curl command:
-
-[#fts_ui_curl_exammple]
-image::fts-ui-curl-example.png[,,align=left]
-
-This example can be copied by means of the *Copy to Clipboard* button, pasted (for example) into a standard console-window, and executed against the prompt.
-This feature therefore provides a useful means of extending experiments initially performed with the UI into a subsequent console-based, script-based, or program-based context.
-(Note, however, that the addition of credentials for authentication is required for execution of the statement outside the context of the current session within Couchbase Web Console.
-See xref:fts-searching-with-curl-http-requests.adoc[Searching with the REST API] for an example.)
-
-Note also the *Show Scoring* checkbox that appears prior to the entries in the *Results for travel-sample-index* panel.
-When this is checked, scores for each document in the list are provided.
-For example:
-
-[#fts_ui_query_scores_display]
-image::fts-ui-query-scores-display.png[,,align=left]
-
-Finally, note the *query syntax help* link that now appears under the *Search* interactive text-field:
-
-[#fts_query_syntax_help_linke]
-image::fts-query-syntax-help-link.png[,700,align=left]
-
-This link takes the user to the documentation on xref:fts-supported-queries.adoc[Supported Queries].
-Such a query can be specified in the *Search* interactive text-field, thereby allowing a search of considerable complexity to be accomplished within Couchbase Web Console.
-
-NOTE: Any supported query can be executed from the UI, meaning the UI can accept a valid string (query string syntax) or a JSON object conforming to a supported syntax (query or search request). However the result set will only contain document IDs along with the requested fields and scores (if applicable). Any array positions or facets' results will _NOT_ be displayed.
diff --git a/modules/fts/pages/fts-searching-full-text-indexes-aliases.adoc b/modules/fts/pages/fts-searching-full-text-indexes-aliases.adoc
deleted file mode 100644
index e5d34e197f..0000000000
--- a/modules/fts/pages/fts-searching-full-text-indexes-aliases.adoc
+++ /dev/null
@@ -1,31 +0,0 @@
-[#searching-full-text-indexes-aliases]
-= Searching Full Text Indexes/Aliases
-
-[abstract]
-Full Text indexes are available under the *Search* tab of the Couchbase Web Console.
-
-Full Text indexes are special-purpose indexes that contain targets derived from the textual contents of the documents within one or more buckets, or collections within those buckets. For more information about different types of indexes, see xref:learn:services-and-indexes/indexes/indexes.adoc[Indexes].
-
-You can access the Full Text Indexes from the *Search* tab. Left-click on this to display the *Full Text Search* panel, which contains a tabular presentation of currently existing indexes, with a row for each index.
-(See xref:fts-searching-from-the-UI.adoc[Searching from the UI] for a full illustration.) 
-
-On the same *Search* tab, you can create aliases for one or more indexes. If you then search against an alias, the results come not just from one index but from all indexes associated with that alias.
-
-To manage an index, left-click on its row. The row expands, as follows:
-
-[#fts_index_management_ui]
-image::fts-index-management-ui.png[,820,align=left]
-
-To manage an alias, left-click on its row. The row expands, as follows:
-
-[#fts_alias_management_ui]
-image::fts-alias-management-ui.png[,820,align=left]
-
-The following buttons are displayed:
-
-* [.ui]*Search* searches the specified term in the designated index or alias.
-* [.ui]*Delete* causes the current index to be deleted.
-* [.ui]*Clone* brings up the *Clone Index* screen, which allows a copy of the current index to be modified as appropriate and saved under a new name.
-* [.ui]*Edit* brings up the *Edit Index* screen, which allows the index to be modified. Saving modifications causes the index to be rebuilt.
-+
-NOTE: Both the [.ui]*Edit Index* and [.ui]*Clone Index* screens are in most respects the same as the [.ui]*Add Index* screen, which was itself described in xref:fts-searching-from-the-UI.adoc[Searching from the UI].
\ No newline at end of file
diff --git a/modules/fts/pages/fts-searching-with-curl-http-requests.adoc b/modules/fts/pages/fts-searching-with-curl-http-requests.adoc
deleted file mode 100644
index 67a5e07d99..0000000000
--- a/modules/fts/pages/fts-searching-with-curl-http-requests.adoc
+++ /dev/null
@@ -1,163 +0,0 @@
-[#Searching-with-the-REST-API-(cURL/HTTP)]
-= Searching with the REST API (cURL/HTTP)
-:page-aliases: fts-searching-with-the-rest-api.adoc
-
-[abstract]
-Full Text Search can be performed using the Couchbase REST API (cURL/HTTP), at the command-line, through the `curl` utility.
-
-[#performing-a-full-text-search-with-rest-at-the-command-line]
-== Performing a Full Text Search with REST at the Command-Line
-
-The syntactic elements for a `curl`-based Full Text Search can be obtained from the Couchbase Web Console. The console allows searches performed via the UI to be translated dynamically into `curl` examples.
-
-Of course, you need an existing index. Refer to either xref:fts-creating-index-from-UI-classic-editor.adoc[Classic Editor] or xref:fts-supported-queries-geo-spatial.adoc#creating_a_geospatial_geopoint_index[Creating a Geospatial Index (type geopoint)] to create an index named something like *travel-sample-index* or *test_geopoint*. Either index will work (the latter will also have the ability to search against geopoints).
-
-To demonstrate this, follow the procedures for accessing the Full Text Search screen, within the Couchbase Web Console, and for performing a simple search; as described in xref:fts-searching-from-the-UI.adoc[Searching from the UI]. Then, left-click on the *show advanced query settings* checkbox, at the right-hand side of the *Search* button:
-
-[#fts_advanced_query_settings]
-image::fts-advanced-query-settings.png[,,align=left]
-
-The *JSON for Query Request* panel displays the submitted query in JSON format.
-Note the *show command-line curl example* checkbox. Selecting this checkbox adds to the content of the initial JSON display to form a completed curl command:
-
-[#fts_ui_curl_exammple]
-image::fts-ui-curl-example.png[,,align=left]
-
-This example can be copied by means of the *Copy to Clipboard* button, pasted into (for example) a standard console-window, and executed against the prompt.
-This feature, therefore, provides a useful means of extending experiments initially performed with the UI into a subsequent console-based, script-based, or program-based context.
-Note, however, that authentication is required for the call to be successful from any context outside the current Couchbase Web Console session.
-Additionally, familiarity with _query strings_ should be acquired for the creation of more complex queries.
-
-[#using-query-strings]
-== Query Strings and Authentication
-
-A _Query String_ combines standard alphanumeric characters with syntactic elements in order to specify complex queries in ASCII form.
-Query Strings can be used for Full Text Searches performed with both the Couchbase Web Console and the REST API.
-A detailed description of Query String-format is provided in xref:fts-supported-queries.adoc[Supported Queries].
-
-For example, to search for instances of both `nice` and `view`, specify `"+nice +view"` in a search from the Couchbase Web Console:
-
-[#fts_query_string_query_at_ui]
-image::fts-query-string-query-at-ui.png[,640,align=left]
-
-When the search has returned, check in succession the *show advanced query settings* and *show command-line curl example* checkboxes.
-The *JSON for Query Request* now displays the following:
-
-[#fts_query_string_results_at_ui]
-image::fts-query-string-results-at-ui.png[,,align=left]
-
-Copy the `curl` command displayed by left-clicking on the *Copy to Clipboard* button.
-Before attempting to execute the command from the command-line, paste it into a text-editor, and add appropriate authentication-credentials.
-For example:
-
-[source,bourne]
-----
-curl -XPOST -H "Content-Type: application/json" \
--u : http://localhost:8094/api/index/test_geopoint/query \
--d '{
-  "explain": true,
-  "fields": [
-    "*"
-  ],
-  "highlight": {},
-  "query": {
-    "query": "{+nice +view}"
-  },
-  "size": 10,
-  "from": 0
-}'
-----
-
-(For detailed information on Couchbase Server _Role-Based Access Control_, see xref:learn:security/authorization-overview.adoc[Authorization].)
-
-The code can now be copied again and pasted against the command-line, and executed, with the result-set appearing as standard output.
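As an alternative to executing the copied `curl` command directly, the same request can be assembled in a scripting language. The following is a minimal Python sketch; the index name, host, and credentials are placeholders, and only the pure request-building helpers are shown.

```python
import base64
import json

# Sketch: build the same FTS REST request body and Basic-auth header that
# the curl example sends. Index name, host, and credentials are placeholders.
def fts_query_body(query_string, size=10, from_=0):
    return {
        "explain": True,
        "fields": ["*"],
        "highlight": {},
        "query": {"query": query_string},
        "size": size,
        "from": from_,
    }

def basic_auth_header(user, password):
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}",
            "Content-Type": "application/json"}

body = json.dumps(fts_query_body("+nice +view"))
headers = basic_auth_header("Administrator", "password")
# urllib.request could then POST `body` with `headers` to
# http://localhost:8094/api/index/test_geopoint/query
```

The result-set is returned as JSON, exactly as with the console-based `curl` invocation.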
-
-For additional assistance on Query String composition, left-click on the *full text query syntax help* link that appears under the *Search* interactive text-field when *show advanced query settings* is checked:
-
-[#fts_query_syntax_help_linke]
-image::fts-query-syntax-help-link.png[,640,align=left]
-
-This link provides access to a xref:query-string-queries.adoc[page] of information on _Query String_ Full Text Search queries.
-
-[#searching-specifically]
-== Searching Specifically
-
-Searches should always be as specific as possible: this helps to avoid excessive resource-consumption, and the retrieval of unnecessarily large amounts of data.
-To facilitate this, the number of _clauses_ that a Search Service query may expand to is deliberately capped at _1024_: if a query would expand to a larger number of clauses, an error is thrown.
-
-For example, the following query attempts to use the wildcard `*`, to return all data from documents' `reviews.content` field.
-The output is piped to the http://stedolan.github.io/jq[jq] program, to enhance readability:
-
-[source, console]
-----
-curl -XPOST -H "Content-Type: application/json" \
--u : http://localhost:8094/api/index/test_geopoint/query \
--d '{
-  "explain": true,
-  "fields": [
-    "*"
-  ],
-  "highlight": {},
-  "query": {
-    "wildcard": "*",
-    "field": "reviews.content"
-  },
-  "size": 10,
-  "from": 0
-}' | jq '.'
-----
-
-Due to the excessive number of clauses that this query would generate, an error is thrown.
-The error-output (along with the request parameters) is as follows:
-
-[source, json]
-----
-{
-  "error": "rest_index: Query, indexName: test_geopoint, err: TooManyClauses over field: `reviews.content` [21579 > maxClauseCount, which is set to 1024]",
-  "request": {
-    "explain": true,
-    "fields": [
-      "*"
-    ],
-    "from": 0,
-    "highlight": {},
-    "query": {
-      "field": "reviews.content",
-      "wildcard": "*"
-    },
-    "size": 10
-  },
-  "status": "fail"
-}
-----
-
-Therefore, to fix the problem, the wildcard match should be more precisely specified, and the query re-attempted. For example, adjusting the *wildcard* specification to *"aapass:[*]"* results in a query that succeeds.
-
-[source, console]
-----
-curl -XPOST -H "Content-Type: application/json" \
--u : http://localhost:8094/api/index/test_geopoint/query \
--d '{
-  "explain": true,
-  "fields": [
-    "*"
-  ],
-  "highlight": {},
-  "query": {
-    "wildcard": "aa*",
-    "field": "reviews.content"
-  },
-  "size": 10,
-  "from": 0
-}' | jq '.'
-----
-
-[#further-rest-examples]
-== Further REST Examples
-
-Further examples of using the REST API to conduct Full Text Searches can be found in xref:fts-supported-queries.adoc[Supported Queries].
-
-[#list-of-rest-features-supporting-full-text-search]
-== List of REST Features Supporting Full Text Search
-
-The full range of features for Full Text Search, as supported by the Couchbase REST API, is documented as part of the REST API's reference information on the page xref:rest-api:rest-fts.adoc[Full Text Search API].
diff --git a/modules/fts/pages/fts-searching-with-sdk.adoc b/modules/fts/pages/fts-searching-with-sdk.adoc
deleted file mode 100644
index 3cf94272af..0000000000
--- a/modules/fts/pages/fts-searching-with-sdk.adoc
+++ /dev/null
@@ -1,67 +0,0 @@
-= Searching with SDK
-
-[.column]
-=== {empty}
-[.content]
-Couchbase provides several SDKs to allow applications to access a Couchbase cluster, and Mobile SDKs to carry the application to the edge.
-
-
-.Links to various SDK documentation
-
-[[analyzer_languages_5.5]]
-[cols="1,4,4"]
-|===
-| SDK | Details | Link
-
-|C SDK
-|The Couchbase C SDK (`libcouchbase`) enables C and C++ programs to access a Couchbase Server cluster.
-The C SDK is also commonly used as a core dependency of SDKs written in other languages to provide a common implementation and high performance.
-Libcouchbase also contains the `cbc` suite of command line tools.
-|xref:3.3@c-sdk:howtos:full-text-searching-with-sdk.adoc[C SDK 3.3]
-
-| .NET SDK
-| The .NET SDK enables you to interact with a Couchbase Server cluster from the .NET Framework using any Common Language Runtime (CLR) language, including C#, F#, and VB.NET. 
-It offers both a traditional synchronous API and an asynchronous API based on the Task-based Asynchronous Pattern (TAP).
-|xref:3.3@dotnet-sdk:howtos:full-text-searching-with-sdk.adoc[.NET SDK 3.3]
-
-|Go SDK
-|The Couchbase Go SDK allows you to connect to a Couchbase Server cluster from Go.
-The Go SDK is a native Go library and uses the high-performance `gocbcore` to handle communicating to the cluster over Couchbase's binary protocols.
-|xref:2.5@go-sdk:howtos:full-text-searching-with-sdk.adoc[Go SDK 2.5]
-
-| Java SDK
-| The Java SDK forms the cornerstone of our JVM clients.
-It allows Java applications to access a Couchbase Server cluster.
-The Java SDK offers traditional synchronous APIs and scalable asynchronous APIs to maximize performance.
-|xref:3.3@java-sdk:howtos:full-text-searching-with-sdk.adoc[Java SDK 3.3]
-
-| Kotlin SDK
-| Our new Kotlin SDK allows Kotlin applications to access a Couchbase Server cluster.
-|xref:1.0@kotlin-sdk:howtos:full-text-search.adoc[Kotlin SDK 1.0]
-
-|Node.js SDK
-|The Node.js SDK allows you to connect to a Couchbase Server cluster from Node.js.
-The Node.js SDK is a native Node.js module using the very fast `libcouchbase` library to handle the communication with the cluster over the Couchbase binary protocol.
-|xref:4.1@nodejs-sdk:howtos:full-text-searching-with-sdk.adoc[Node.js SDK 4.1]
-
-|PHP SDK
-|The PHP SDK allows you to connect to a Couchbase Server cluster from PHP.
-The PHP SDK is a native PHP extension and uses the Couchbase high-performance C library `libcouchbase` to handle the communication to the cluster over Couchbase binary protocols.
-|xref:4.0@php-sdk:howtos:full-text-searching-with-sdk.adoc[PHP SDK 4.0]
-
-|Python SDK
-|The Python SDK allows Python applications to access a Couchbase Server cluster.
-The Python SDK offers a traditional synchronous API and integration with twisted, gevent, and asyncio.
-It depends on the C SDK (`libcouchbase`) and utilizes it for performance and reliability.
-|xref:4.0@python-sdk:howtos:full-text-searching-with-sdk.adoc[Python SDK 4.0]
-
-|Ruby SDK
-
-|The Ruby SDK allows Ruby applications to access a Couchbase Server cluster. 
-The Ruby SDK includes high-performance native Ruby extensions to handle communicating to the cluster over Couchbase's binary protocols.
-|xref:3.3@ruby-sdk:howtos:full-text-searching-with-sdk.adoc[Ruby SDK 3.3]
-
-| Scala SDK
-| Our new Scala SDK allows Scala applications to access a Couchbase Server cluster.
-It offers synchronous, asynchronous, and reactive APIs for flexibility and maximum performance.
-|xref:1.3@scala-sdk:howtos:full-text-searching-with-sdk.adoc[Scala SDK 1.3]
diff --git a/modules/fts/pages/fts-secure-fts-queries.adoc b/modules/fts/pages/fts-secure-fts-queries.adoc
deleted file mode 100644
index cd88b5ae57..0000000000
--- a/modules/fts/pages/fts-secure-fts-queries.adoc
+++ /dev/null
@@ -1,26 +0,0 @@
-= Searching Securely Using SSL
-
-To securely query data from the FTS service, the user must follow these steps:
-
-1. Provide the username and password (-u).
-2. Use the https protocol.
-3. Specify the IP address or hostname of the server hosting the FTS service.
-4. Specify the SSL port (18094).
-
-*Example*
-
-[source,console]
-----
-curl -u username:password -XPOST -H "Content-Type: application/json" \
-https://:18094/api/index/travel-sample-index/query \
--d '{
-        "explain": true,
-        "fields": ["*"],
-        "highlight": {},
-        "query": {
-                    "query": "{ \"+nice +view\" }"
-                 }
-    }'
-----
-
-NOTE: Ensure that the SSL ports are enabled in the cluster.
\ No newline at end of file
diff --git a/modules/fts/pages/fts-supported-queries-analytic-query.adoc b/modules/fts/pages/fts-supported-queries-analytic-query.adoc
deleted file mode 100644
index 4294c64503..0000000000
--- a/modules/fts/pages/fts-supported-queries-analytic-query.adoc
+++ /dev/null
@@ -1,11 +0,0 @@
-= Analytic Queries
-
-Use an Analytic query to apply an analyzer to your search request.
-If you don't provide an analyzer, the Search Service uses the analyzer from the Search index. 
-
-The following queries are Analytic queries:
-
-* xref:fts-supported-queries-match.adoc[Match]
-* xref:fts-supported-queries-match-phrase.adoc[Match Phrase]
-
-For information on analyzers, see xref:fts-index-analyzers.adoc[Understanding Analyzers].
diff --git a/modules/fts/pages/fts-supported-queries-boolean-field-query.adoc b/modules/fts/pages/fts-supported-queries-boolean-field-query.adoc
deleted file mode 100644
index d97517af8d..0000000000
--- a/modules/fts/pages/fts-supported-queries-boolean-field-query.adoc
+++ /dev/null
@@ -1,20 +0,0 @@
-= Boolean Query
-
-A _boolean query_ is a combination of conjunction and disjunction queries.
-A boolean query takes three lists of queries:
-
-* `must`: Result documents must satisfy all of these queries.
-* `should`: Result documents should satisfy these queries.
-* `must_not`: Result documents must not satisfy any of these queries.
-
-[source,json]
-----
-{
- "must": {
-   "conjuncts":[{"field":"reviews.content", "match": "location"}]},
- "must_not": {
-   "disjuncts": [{"field":"free_breakfast", "bool": false}]},
- "should": {
-   "disjuncts": [{"field":"free_breakfast", "bool": true}]}
-}
-----
\ No newline at end of file
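The must/should/must_not semantics above can be modelled schematically. The match logic below is deliberately simplified (substring and boolean equality over flat dicts) and is not how the Search Service analyzes or scores text; it only illustrates how the three clause lists gate inclusion.

```python
# Schematic evaluator for boolean-query clause lists over plain dicts.
# Matching is simplified: substring for "match", identity for "bool".
def matches(doc, clause):
    value = doc.get(clause["field"])
    if "match" in clause:
        return isinstance(value, str) and clause["match"] in value
    if "bool" in clause:
        return value is clause["bool"]
    return False

def boolean_query(doc, must=(), should=(), must_not=()):
    if any(matches(doc, c) for c in must_not):
        return False          # any must_not hit excludes the document
    if not all(matches(doc, c) for c in must):
        return False          # every must clause is required
    # In this sketch, `should` clauses influence ranking rather than
    # inclusion when `must` clauses are present.
    return True
```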
diff --git a/modules/fts/pages/fts-supported-queries-compound-query.adoc b/modules/fts/pages/fts-supported-queries-compound-query.adoc
deleted file mode 100644
index fddf5cb20b..0000000000
--- a/modules/fts/pages/fts-supported-queries-compound-query.adoc
+++ /dev/null
@@ -1,8 +0,0 @@
-= Compound Queries
-
-Compound Queries:: Accept multiple queries simultaneously, and return either the _conjunction_ of results from the result-sets, or a _disjunction_.
-
-The following queries are compound queries:
-
-* xref:fts-supported-queries-conjuncts-disjuncts.adoc[Conjuncts & Disjuncts]
-* xref:fts-supported-queries-boolean-field-query.adoc[Boolean]
\ No newline at end of file
diff --git a/modules/fts/pages/fts-supported-queries-conjuncts-disjuncts.adoc b/modules/fts/pages/fts-supported-queries-conjuncts-disjuncts.adoc
deleted file mode 100644
index e55d6c790b..0000000000
--- a/modules/fts/pages/fts-supported-queries-conjuncts-disjuncts.adoc
+++ /dev/null
@@ -1,40 +0,0 @@
-= Conjunction & Disjunction Query
-
-== Conjunction Query (AND)
-
-A _conjunction_ query contains multiple _child queries_.
-Its result documents must satisfy all of the child queries.
-
-[source,json]
-----
-{
- "conjuncts":[
-   {"field":"reviews.content", "match": "location"},
-   {"field":"free_breakfast", "bool": true}
- ]
-}
-----
-
-A demonstration of a conjunction query using the Java SDK can be found in xref:3.2@java-sdk::full-text-searching-with-sdk.adoc[Searching from the SDK].
-
-== Disjunction Query (OR)
-
-A _disjunction_ query contains multiple _child queries_.
-Its result documents must satisfy a configurable `min` number of child queries.
-By default this `min` is set to 1.
-For example, if three child queries (A, B, and C) are specified, a `min` of 1 specifies that a result document must satisfy at least one of A, B, and C; a `min` of 2 would require at least two of them to be satisfied.
-
-[source,json]
-----
-{
- "disjuncts":[
-   {"field":"reviews.content", "match": "location"},
-   {"field":"free_breakfast", "bool": true}
- ]
-}
-----
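The conjunction and disjunction semantics can be sketched as follows. Child-query matching is stubbed out with predicate functions for clarity; this is a model of the `min` rule, not of the Search Service's matching or scoring.

```python
# Sketch: a disjunction qualifies a document when at least `min_match` of
# the child queries are satisfied; a conjunction requires all of them.
def disjunction(doc, children, min_match=1):
    satisfied = sum(1 for child in children if child(doc))
    return satisfied >= min_match

def conjunction(doc, children):
    return all(child(doc) for child in children)
```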
-
-A demonstration of a disjunction query using the Java SDK can be found in xref:3.2@java-sdk::full-text-searching-with-sdk.adoc[Searching from the SDK].
-
-
-
diff --git a/modules/fts/pages/fts-supported-queries-date-range.adoc b/modules/fts/pages/fts-supported-queries-date-range.adoc
deleted file mode 100644
index 72cd747d3a..0000000000
--- a/modules/fts/pages/fts-supported-queries-date-range.adoc
+++ /dev/null
@@ -1,23 +0,0 @@
-= Date Range Query
-
-A _date_range_ query finds documents containing a date value in the specified field, within the specified range.
-
-Dates should be in the format specified by RFC-3339, which is a specific profile of ISO-8601. 
-
-Define the endpoints using the fields [.param]`start` and [.param]`end`. 
-You can omit any one endpoint, but not both.
-
-The [.param]`inclusive_start` and [.param]`inclusive_end` properties in the query JSON control whether the endpoints are included or excluded.
-
-== Example
-
-[source,json]
-----
-{
- "start": "2001-10-09T10:20:30-08:00",
- "end": "2016-10-31",
- "inclusive_start": false,
- "inclusive_end": false,
- "field": "review_date"
-}
-----
\ No newline at end of file
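The endpoint logic can be sketched in Python using RFC-3339 parsing. This is an illustrative model: date-only values are treated as midnight UTC here, and the Search Service's own datetime handling is more flexible (note also that `datetime.fromisoformat` only accepts a trailing `Z` from Python 3.11 onward, so explicit offsets are used below).

```python
from datetime import datetime, timezone

# Sketch: test whether a document's date falls between `start` and `end`,
# honouring inclusive_start / inclusive_end, as in the example query-body.
def parse_rfc3339(s):
    dt = datetime.fromisoformat(s)        # handles offsets like -08:00
    if dt.tzinfo is None:                 # date-only values parse as naive
        dt = dt.replace(tzinfo=timezone.utc)
    return dt

def in_date_range(value, start=None, end=None,
                  inclusive_start=True, inclusive_end=True):
    v = parse_rfc3339(value)
    if start is not None:
        s = parse_rfc3339(start)
        if v < s or (v == s and not inclusive_start):
            return False
    if end is not None:
        e = parse_rfc3339(end)
        if v > e or (v == e and not inclusive_end):
            return False
    return True
```

With the example endpoints, a `review_date` of `2014-05-13T10:00:00+00:00` falls inside the range, while `2016-10-31` itself is excluded because `inclusive_end` is false.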
diff --git a/modules/fts/pages/fts-supported-queries-fuzzy.adoc b/modules/fts/pages/fts-supported-queries-fuzzy.adoc
deleted file mode 100644
index b94e8b37ad..0000000000
--- a/modules/fts/pages/fts-supported-queries-fuzzy.adoc
+++ /dev/null
@@ -1,24 +0,0 @@
-= Fuzzy Query
-
-A _fuzzy query_ matches terms within a specified _edit_ (or _Levenshtein_) distance: meaning that terms are considered to match when they are to a specified degree _similar_, rather than _exact_.
-A common prefix of a stated length may also be specified as a requirement for matching.
-
-NOTE: The fuzzy query is a non-analytic query, meaning it won't perform any text analysis on the query text.
-
-Fuzziness is specified by means of a single integer.
-A value of `0` indicates that the terms must be identical.
-The maximum value that you can specify is `2`.
-For example:
-
-[source,json]
-----
-{
- "term": "interest",
- "field": "reviews.content",
- "fuzziness": 2
-}
-----
-
-A demonstration of __Fuzziness__ using the Java SDK, in the context of the _term query_ (see below) can be found in xref:3.2@java-sdk::full-text-searching-with-sdk.adoc[Searching from the SDK].
-
-NOTE: Two such queries are specified, with the difference in fuzziness between them resulting in different forms of match, and different sizes of result-sets.
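The edit distance underlying fuzziness can be illustrated with a classic dynamic-programming Levenshtein implementation. This is a conceptual sketch of the distance measure itself, not the Search Service's (automaton-based) term matching.

```python
# Levenshtein distance: minimum number of single-character insertions,
# deletions, and substitutions turning one string into the other.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def fuzzy_match(term, candidate, fuzziness):
    return levenshtein(term, candidate) <= fuzziness
```

For example, `"interest"` matches `"internet"` at fuzziness `2` (two substitutions) but not at fuzziness `1`.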
diff --git a/modules/fts/pages/fts-supported-queries-geo-bounded-polygon.adoc b/modules/fts/pages/fts-supported-queries-geo-bounded-polygon.adoc
deleted file mode 100644
index 49ce835ac3..0000000000
--- a/modules/fts/pages/fts-supported-queries-geo-bounded-polygon.adoc
+++ /dev/null
@@ -1,71 +0,0 @@
-= Creating a Query: Polygon-Based
-
-Note that a detailed example of Geopoint index creation can be found at xref:fts-supported-queries-geopoint-spatial.adoc#creating_a_geospatial_geopoint_index[Geopoint Index Creation], and of running queries at xref:fts-supported-queries-geopoint-spatial.adoc#creating_geopoint_rest_query_radius_based[Geopoint Radius Queries].
-
-In addition, detailed information on performing queries with the Search REST API can be found in xref:fts-searching-with-curl-http-requests.adoc[Searching with the REST API], which shows how to use the full `curl` command and how to incorporate query-bodies into your cURL requests.
-
-The following query-body uses an array, each of whose elements is a string containing two floating-point numbers, to specify the latitude and longitude of each of the corners of a polygon, known as _polygon points_.
-In each string, the `lat` floating-point value precedes the `lon`.
-
-Here, the last-specified string in the array is identical to the initial string, thus explicitly closing the polygon.
-However, specifying an explicit closure in this way is optional: the closure will be inferred by Couchbase Server if not explicitly specified.
-
-If a target data-location falls within the polygon, its document is returned.
-The results are specified to be sorted on `name` alone.
-
-[source,json]
-----
-{
-  "query": {
-    "field": "geo",
-    "polygon_points": [
-      "37.79393211306212,-122.44234633404847",
-      "37.77995881733997,-122.43977141339417",
-      "37.788031092020155,-122.42925715405579",
-      "37.79026946582319,-122.41149020154114",
-      "37.79571192027403,-122.40735054016113",
-      "37.79393211306212,-122.44234633404847"
-    ]
-  },
-  "sort": [
-    "name"
-  ]
-}
-----
-
-A subset of formatted output might appear as follows:
-
-[source,json]
-----
-    .
-    .
-    .
-"hits": [
-  {
-    "index": "test_geopoint_610cbb5808dfd319_4c1c5584",
-    "id": "landmark_25944",
-    "score": 0.23634379439298683,
-    "sort": [
-      "4"
-    ]
-  },
-  {
-    "index": "test_geopoint_610cbb5808dfd319_4c1c5584",
-    "id": "landmark_25681",
-    "score": 0.31367419004657393,
-    "sort": [
-      "alta"
-    ]
-  },
-  {
-    "index": "test_geopoint_610cbb5808dfd319_4c1c5584",
-    "id": "landmark_25686",
-    "score": 0.31367419004657393,
-    "sort": [
-      "atherton"
-    ]
-  },
-        .
-        .
-        .
-----
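The within-polygon test applied to each candidate location can be approximated with a standard ray-casting check. The following is an illustrative sketch only -- the `point_in_polygon` helper and the planar treatment of lat/lon are our assumptions, not the service's actual implementation:

```python
# Illustrative only: a planar ray-casting point-in-polygon test.
# The Search service's real spatial test is more robust; this sketch
# just shows the basic idea of "target location falls within the polygon".

def point_in_polygon(lat, lon, ring):
    """ring: list of (lat, lon) vertices; explicit closure is optional."""
    inside = False
    n = len(ring)
    for i in range(n):
        lat1, lon1 = ring[i]
        lat2, lon2 = ring[(i + 1) % n]
        # Count crossings of a ray cast from the point in the +lon direction.
        if (lat1 > lat) != (lat2 > lat):
            cross = lon1 + (lat - lat1) / (lat2 - lat1) * (lon2 - lon1)
            if lon < cross:
                inside = not inside
    return inside

# Polygon points from the query above (lat precedes lon in the legacy format).
ring = [
    (37.79393211306212, -122.44234633404847),
    (37.77995881733997, -122.43977141339417),
    (37.788031092020155, -122.42925715405579),
    (37.79026946582319, -122.41149020154114),
    (37.79571192027403, -122.40735054016113),
]
print(point_in_polygon(37.7920, -122.4250, ring))  # True: inside the polygon
```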
diff --git a/modules/fts/pages/fts-supported-queries-geo-bounded-rectangle.adoc b/modules/fts/pages/fts-supported-queries-geo-bounded-rectangle.adoc
deleted file mode 100644
index 38b6c5ffdc..0000000000
--- a/modules/fts/pages/fts-supported-queries-geo-bounded-rectangle.adoc
+++ /dev/null
@@ -1,73 +0,0 @@
-= Creating a Query: Rectangle-Based
-
-Note that detailed examples of Geopoint index creation and query execution can be found in xref:fts-supported-queries-geopoint-spatial.adoc#creating_a_geospatial_geopoint_index[Geopoint Index Creation] and xref:fts-supported-queries-geopoint-spatial.adoc#creating_geopoint_rest_query_radius_based[Geopoint Radius Queries].
-
-In addition, detailed information on performing queries with the Search REST API can be found in xref:fts-searching-with-curl-http-requests.adoc[Searching with the REST API], which shows how to use the full `curl` command and how to incorporate query-bodies into your cURL requests.
-
-In the following query-body, the `top_left` of a rectangle is expressed by means of an array of two floating-point numbers, specifying a longitude of `-2.235143` and a latitude of `53.482358`.
-The `bottom_right` is expressed by means of key-value pairs, specifying a longitude of `28.955043` and a latitude of `40.991862`.
-The results are specified to be sorted on `name` alone.
-
-[source,json]
-----
-{
-  "from": 0,
-  "size": 10,
-  "query": {
-    "top_left": [-2.235143, 53.482358],
-    "bottom_right": {
-      "lon": 28.955043,
-      "lat": 40.991862
-     },
-    "field": "geo"
-  },
-  "sort": [
-    "name"
-  ]
-}
-----
-
-A subset of formatted output might appear as follows:
-
-[source,json]
-----
-          .
-          .
-          .
-"hits": [
-  {
-    "index": "test_geopoint_610cbb5808dfd319_4c1c5584",
-    "id": "landmark_16144",
-    "score": 0.004836809397039384,
-    "sort": [
-      "02"
-    ]
-  },
-  {
-    "index": "test_geopoint_610cbb5808dfd319_4c1c5584",
-    "id": "hotel_9905",
-    "score": 0.01625607942050202,
-    "sort": [
-      "1"
-    ]
-  },
-  {
-    "index": "test_geopoint_610cbb5808dfd319_4c1c5584",
-    "id": "hotel_16460",
-    "score": 0.004836809397039384,
-    "sort": [
-      "11"
-    ]
-  },
-  {
-    "index": "test_geopoint_610cbb5808dfd319_4c1c5584",
-    "id": "hotel_21674",
-    "score": 0.010011952055063241,
-    "sort": [
-      "17"
-    ]
-  },
-          .
-          .
-          .
-----
diff --git a/modules/fts/pages/fts-supported-queries-geo-point-distance.adoc b/modules/fts/pages/fts-supported-queries-geo-point-distance.adoc
deleted file mode 100644
index 5c6158525a..0000000000
--- a/modules/fts/pages/fts-supported-queries-geo-point-distance.adoc
+++ /dev/null
@@ -1,82 +0,0 @@
-= Creating a Query: Radius-Based
-
-Note that detailed examples of Geopoint index creation and query execution can be found in xref:fts-supported-queries-geopoint-spatial.adoc#creating_a_geospatial_geopoint_index[Geopoint Index Creation] and xref:fts-supported-queries-geopoint-spatial.adoc#creating_geopoint_rest_query_radius_based[Geopoint Radius Queries].
-
-In addition, detailed information on performing queries with the Search REST API can be found in xref:fts-searching-with-curl-http-requests.adoc[Searching with the REST API], which shows how to use the full `curl` command and how to incorporate query-bodies into your cURL requests.
-
-The following query-body specifies a longitude of `-2.235143` and a latitude of `53.482358`.
-The target-field `geo` is specified, as is a `distance` of `100` miles: this is the radius within which target-locations must reside for their documents to be returned.
-
-[source,json]
-----
-{
-  "from": 0,
-  "size": 10,
-  "query": {
-    "location": {
-      "lon": -2.235143,
-      "lat": 53.482358
-     },
-      "distance": "100mi",
-      "field": "geo"
-    },
-  "sort": [
-    {
-      "by": "geo_distance",
-      "field": "geo",
-      "unit": "mi",
-      "location": {
-      "lon": -2.235143,
-      "lat": 53.482358
-      }
-    }
-  ]
-}
-----
-
-The query contains a `sort` object, which specifies that the returned documents should be ordered in terms of their _geo_distance_ from specified `lon` and `lat` coordinates: these values need not be identical to those specified in the `query` object.
-
-A subset of formatted console output might appear as follows:
-
-[source,json]
-----
-            .
-            .
-            .
-"hits": [
-  {
-    "index": "test_geopoint_610cbb5808dfd319_4c1c5584",
-    "id": "landmark_17411",
-    "score": 0.025840756648257503,
-    "sort": [
-      " \u0001?E#9>N\f\"e"
-    ]
-  },
-  {
-    "index": "test_geopoint_610cbb5808dfd319_4c1c5584",
-    "id": "landmark_17409",
-    "score": 0.025840756648257503,
-    "sort": [
-      " \u0001?O~i*(kD,"
-    ]
-  },
-  {
-    "index": "test_geopoint_610cbb5808dfd319_4c1c5584",
-    "id": "landmark_17403",
-    "score": 0.025840756648257503,
-    "sort": [
-      " \u0001?Sg*|/t\u001f\u0002"
-    ]
-  },
-  {
-    "index": "test_geopoint_610cbb5808dfd319_4c1c5584",
-    "id": "hotel_17413",
-    "score": 0.025840756648257503,
-    "sort": [
-      " \u0001?U]S\\.e\u0002_"
-    ]
-  },
-            .
-            .
-            .
-----
diff --git a/modules/fts/pages/fts-supported-queries-geojson-spatial.adoc b/modules/fts/pages/fts-supported-queries-geojson-spatial.adoc
deleted file mode 100644
index 8d0b967642..0000000000
--- a/modules/fts/pages/fts-supported-queries-geojson-spatial.adoc
+++ /dev/null
@@ -1,1059 +0,0 @@
-= Geospatial GeoJSON Queries
-
-[abstract]
-_GeoJSON_ queries return documents whose location is held in either the legacy Geopoint format or standard GeoJSON, thus providing more utility than the legacy point-distance, bounded-rectangle and bounded-polygon queries against indexed Geopoint fields.
-
-include::partial$fts-geojson-intro-common.adoc[]
-
-[#prerequisites-dataset]
-
-== Prerequisites - Modify the travel-sample dataset
-
-The `travel-sample` bucket, provided for test and development, does NOT contain any GeoJSON constructs in its documents (only legacy Geopoint information), so you will need to modify the `travel-sample` data to work with GeoJSON.
-
-The dataset can be modified in either of two ways:
-
-* Adding new GeoJSON objects to your documents.
-
-* Converting Geopoints to GeoJSON Point types in your documents.
-
-To run the examples in this documentation, the first method, *adding new GeoJSON objects to your documents*, is required.
-
-* Example documents that have a geo field (airports, hotels or landmarks) such as `airport_1254` in `travel-sample._default._default`:
-+
-[source, json]
-----
-{
-  "airportname": "Calais Dunkerque",
-  "city": "Calais",
-  "country": "France",
-  "faa": "CQF",
-  "geo": {
-    "alt": 12,
-    "lat": 50.962097,
-    "lon": 1.954764
-  },
-  "icao": "LFAC",
-  "id": 1254,
-  "type": "airport",
-  "tz": "Europe/Paris"
-}
-----
-
-* *Adding a new GeoJSON object(s) (required for running the examples)*
-+
-Using SQL++ (or {sqlpp}) in the Query Workbench, we can quickly read the "geo" objects in `travel-sample._default._default` and generate and add a new `geojson` object to each document. In addition, the second statement below adds a higher-level GeoJSON object (a Couchbase addition to the spec) representing a 10 mile radius around each airport (only for type=airport).
-+
-[source, n1ql]
-----
-UPDATE `travel-sample`._default._default
-    SET geojson = { "type": "Point", "coordinates": [geo.lon, geo.lat] }
-    WHERE geo IS NOT null;
-
-UPDATE  `travel-sample`._default._default
-    SET geoarea = { "coordinates": [geo.lon, geo.lat], "type": "circle", "radius": "10mi"}
-    WHERE geo IS NOT null AND type="airport";
-----
-+
-After running the above statements, airport documents are updated as follows (hotel and landmark documents will not have a `geoarea` sub-object):
-+
-[source, json]
-----
-{
-  "airportname": "Calais Dunkerque",
-  "city": "Calais",
-  "country": "France",
-  "faa": "CQF",
-  "geo": {
-    "alt": 12,
-    "lat": 50.962097,
-    "lon": 1.954764
-  },
-  "geoarea": {
-    "coordinates": [
-      1.954764,
-      50.962097
-    ],
-    "radius": "10mi",
-    "type": "circle"
-  },
-  "geojson": {
-    "coordinates": [
-      1.954764,
-      50.962097
-    ],
-    "type": "Point"
-  },
-  "icao": "LFAC",
-  "id": 1254,
-  "type": "airport",
-  "tz": "Europe/Paris"
-}
-----
-
-* *Converting Geopoints (for reference only, do not use for the examples)*
-+
-Using SQL++ (or {sqlpp}) in the Query Workbench, we can quickly convert all top-level "geo" objects in `travel-sample._default._default`:
-+
-[source, n1ql]
-----
-UPDATE `travel-sample`._default._default
-    SET geo.type = "Point", geo.coordinates = [geo.lon, geo.lat] WHERE geo IS NOT null;
-
-UPDATE `travel-sample`._default._default
-    UNSET geo.lat, geo.lon WHERE geo IS NOT null;
-----
-+
-After running the above conversion, the updated documents look like this:
-+
-[source, json]
-----
-{
-  "airportname": "Calais Dunkerque",
-  "city": "Calais",
-  "country": "France",
-  "faa": "CQF",
-  "geo": {
-    "alt": 12,
-    "coordinates": [
-      1.954764,
-      50.962097
-    ],
-    "type": "Point"
-  },
-  "icao": "LFAC",
-  "id": 1254,
-  "type": "airport",
-  "tz": "Europe/Paris"
-}
-----
-
-== GeoJSON Syntax
-
-As previously discussed, the GeoJSON shapes supported by the Search service are:
-
-* *Point, LineString, Polygon, MultiPoint, MultiLineString, MultiPolygon, and GeometryCollection*
-
-The Search service follows strict GeoJSON syntax for the above seven (7) standard types:
-
-* GeoJSON position arrays are either [longitude, latitude] or [longitude, latitude, altitude].
-
-** However, the Search service only supports [longitude, latitude].
-
-* Right-hand-rule winding order, as per the RFC 7946 GeoJSON recommendations.
-
-** LineString and Polygon geometries contain coordinates in an order: lines go in a certain direction, and polygon rings do too.
-
-** The direction of a LineString often reflects the direction of something in real life: a GPS trace will go in the direction of movement, or a street in the direction of allowed traffic flow.
-
-** Polygon ring order is undefined in GeoJSON, but there is a useful default to adopt: the right-hand rule. Specifically, this means that
-
-*** The exterior ring should be counterclockwise.
-
-*** Interior rings should be clockwise.
-
-In addition to the above shapes, Search also supports two additional custom shapes (Couchbase-specific) to make spatial approximations easier for users to utilize:
-
-* *Circle, and Envelope*
-
-The Search service follows its own syntax for the above two (2) custom types (see below).
-
-== Supported GeoJSON Data Types
-
-=== Point (https://www.rfc-editor.org/rfc/rfc7946#section-3.1.2[RFC 7946: 3.1.2])
-
-The following specifies a GeoJSON Point field in a document:
-
-[source, json]
-----
-{
- "type": "Point",
- "coordinates": [75.05687713623047,22.53539059204079]
-}
-----
-
-A point is a single geographic coordinate, such as the location of a building or the current position given by any Geolocation API.
-NOTE: The standard supports only a single way of specifying the coordinates: an array of longitude followed by latitude, i.e. [lng, lat].
-
-=== Linestring (https://www.rfc-editor.org/rfc/rfc7946#section-3.1.4[RFC 7946: 3.1.4])
-
-The following specifies a GeoJSON Linestring field in a document:
-
-[source, json]
-----
-{
-   "type": "LineString",
-   "coordinates": [
-[ 77.01416015625, 23.0797317624497],
-[ 78.134765625, 20.385825381874263]
-    ]
-}
-----
-
-A linestring is defined by an array of two or more positions. By specifying only two points, the linestring represents a straight line. Specifying more than two points creates an arbitrary path.
-
-===  Polygon (https://www.rfc-editor.org/rfc/rfc7946#section-3.1.6[RFC 7946: 3.1.6])
-
-The following specifies a GeoJSON Polygon field in a document:
-
-[source, json]
-----
-{
- "type": "Polygon",
- "coordinates": [ [ [ 85.605, 57.207],
-                    [ 86.396, 55.998],
-                    [ 87.033, 56.716],
-                    [ 85.605, 57.207]
-                ] ]
-}
-----
-
-A polygon is defined by a list of lists of points. The first and last points in each (outer) list must be the same (that is, the polygon must be closed), and the exterior coordinates must be in counterclockwise (CCW) order.
-Polygons with holes are also supported; the boundary vertices of a hole must follow clockwise order.
-For polygons with a single ring, the ring cannot self-intersect.
-NOTE: The CCW order of vertices is strictly mandated for geoshapes in Couchbase Server, and any violation of this requirement will result in unexpected search results.
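The winding-order requirement can be checked before indexing with the shoelace formula: a positive signed area means the ring is counterclockwise. The following is a minimal planar sketch under our own naming (`is_counterclockwise` is not part of any Couchbase API):

```python
# Signed-area (shoelace) check for GeoJSON ring winding order.
# Positive area => counterclockwise (valid exterior ring);
# negative area => clockwise (valid interior ring / hole).

def is_counterclockwise(ring):
    """ring: list of [lon, lat] positions; first == last closure is optional."""
    area2 = 0.0  # twice the signed area
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        area2 += x1 * y2 - x2 * y1
    return area2 > 0

# The exterior ring from the Polygon example above is counterclockwise:
exterior = [[85.605, 57.207], [86.396, 55.998], [87.033, 56.716], [85.605, 57.207]]
print(is_counterclockwise(exterior))        # True
print(is_counterclockwise(exterior[::-1]))  # False (reversed => clockwise)
```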
-
-=== MultiPoint (https://www.rfc-editor.org/rfc/rfc7946#section-3.1.3[RFC 7946: 3.1.3])
-
-The following specifies a GeoJSON Multipoint field in a document:
-
-[source, json]
-----
-{
- "type": "MultiPoint",
- "coordinates": [
-    [ -115.8343505859375, 38.45789034424927],
-    [ -115.81237792968749, 38.19502155795575],
-    [ -120.80017089843749, 36.54053616262899],
-    [ -120.67932128906249, 36.33725319397006]
- ]
-}
-----
-
-=== MultiLineString (https://www.rfc-editor.org/rfc/rfc7946#section-3.1.5[RFC 7946: 3.1.5])
-
-The following specifies a GeoJSON MultiLineString field in a document:
-
-[source, json]
-----
-{
- "type": "MultiLineString",
- "coordinates": [
-    [ [ -118.31726074, 35.250105158],[ -117.509765624, 35.3756141] ],
-    [ [ -118.6962890, 34.624167789],[ -118.317260742, 35.03899204] ],
-    [ [ -117.9492187, 35.146862906], [ -117.6745605, 34.41144164] ]
-]
-}
-----
-
-=== MultiPolygon (https://www.rfc-editor.org/rfc/rfc7946#section-3.1.7[RFC 7946: 3.1.7])
-
-The following specifies a GeoJSON MultiPolygon field in a document:
-
-[source, json]
-----
-{
- "type": "MultiPolygon",
- "coordinates": [
-    [ [ [ -73.958, 40.8003 ], [ -73.9498, 40.7968 ],
-        [ -73.9737, 40.7648 ], [ -73.9814, 40.7681 ],
-        [ -73.958, 40.8003 ] ] ],
-
-
-    [ [ [ -73.958, 40.8003 ], [ -73.9498, 40.7968 ],
-        [ -73.9737, 40.7648 ], [ -73.958, 40.8003 ] ] ]
- ]
-}
-----
-
-=== GeometryCollection (https://www.rfc-editor.org/rfc/rfc7946#section-3.1.8[RFC 7946: 3.1.8])
-
-The following specifies a GeoJSON GeometryCollection field in a document:
-A GeometryCollection has a member with the name "geometries". The value of "geometries" is an array, each element of which is a GeoJSON Geometry object. It is possible for this array to be empty.
-
-Unlike the other geometry types described above, a GeometryCollection can be a heterogeneous composition of smaller Geometry objects. For example, a Geometry object in the shape of a lowercase roman "i" can be composed of one Point and one LineString.
-Nested GeometryCollections are invalid.
-
-[source, json]
-----
-{
- "type": "GeometryCollection",
- "geometries": [
-    {
-      "type": "MultiPoint",
-      "coordinates": [
-         [ -73.9580, 40.8003 ],
-         [ -73.9498, 40.7968 ],
-         [ -73.9737, 40.7648 ],
-         [ -73.9814, 40.7681 ]
-      ]
-    },
-    {
-      "type": "MultiLineString",
-      "coordinates": [
-         [ [ -73.96943, 40.78519 ], [ -73.96082, 40.78095 ] ],
-         [ [ -73.96415, 40.79229 ], [ -73.95544, 40.78854 ] ],
-         [ [ -73.97162, 40.78205 ], [ -73.96374, 40.77715 ] ],
-         [ [ -73.97880, 40.77247 ], [ -73.97036, 40.76811 ] ]
-      ]
-    },
-    {
-      "type" : "Polygon",
-      "coordinates" : [
-    [ [ 0 , 0 ] , [ 3 , 6 ] , [ 6 , 1 ] , [ 0 , 0 ] ],
-    [ [ 2 , 2 ] , [ 3 , 3 ] , [ 4 , 2 ] , [ 2 , 2 ] ]
-    ]
-   }
-]
-}
-----
-
-=== Envelope (Couchbase specific extension)
-
-The Envelope type consists of the coordinates of the upper-left and lower-right points of a shape, representing a bounding rectangle in the format +++[[minLon, maxLat], [maxLon, minLat]]+++.
-
-[source, json]
-----
-{
-    "type": "envelope",
-    "coordinates": [
-      [72.83, 18.979],
-      [78.508, 17.4555]
-    ]
-}
-----
-
-=== Circle (Couchbase specific extension)
-
-To cover a circular region of the earth's surface, use the Circle shape.
-A sample circular shape is shown below.
-
-[source, json]
-----
-{
- "type": "circle",
- "coordinates": [75.05687713623047,22.53539059204079],
- "radius": "1000m"
-}
-----
-
-A circle is specified by its center-point coordinates along with a radius (or distance).
-
-Example formats supported for the radius are:
-"5in", "5inch", "7yd", "7yards", "9ft", "9feet", "11km", "11kilometers", "3nm", "3nauticalmiles", "13mm", "13millimeters", "15cm", "15centimeters", "17mi", "17miles", "19m" or "19meters".
-
-[#specifying-distances]
-== Distances
-
-Multiple unit-types can be used to express the radius (or distance) of the *Circle* type.
-These are listed in the table below, with the strings that specify them in REST queries.
-
-[#geospatial-distance-units,cols="1,2"]
-|===
-| Units | Specify with
-
-| inches
-| `in` or `inch`
-
-| feet
-| `ft` or `feet`
-
-| yards
-| `yd` or `yards`
-
-| miles
-| `mi` or `miles`
-
-| nautical miles
-| `nm` or `nauticalmiles`
-
-| millimeters
-| `mm` or `millimeters`
-
-| centimeters
-| `cm` or `centimeters`
-
-| meters
-| `m` or `meters`
-
-| kilometers
-| `km` or `kilometers`
-
-|===
-
-The integer used to specify the number of units must precede the unit-name, with no space left in-between.
-For example, _five inches_ can be specified either by the string `"5in"`, or by the string `"5inches"`; while _thirteen nautical miles_ can be specified as either `"13nm"` or `"13nauticalmiles"`.
-
-If no recognized unit is specified, the entire string is parsed as a number, and the distance is assumed to be in _meters_.
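Under the rules above, a distance string can be parsed by splitting the numeric prefix from the unit suffix. The following is a hedged sketch -- the `distance_to_meters` function and its factor table are ours, not Search's internal code:

```python
import re

# Approximate metric factors for the unit strings in the table above.
UNIT_METERS = {
    "in": 0.0254, "inch": 0.0254,
    "ft": 0.3048, "feet": 0.3048,
    "yd": 0.9144, "yards": 0.9144,
    "mi": 1609.344, "miles": 1609.344,
    "nm": 1852.0, "nauticalmiles": 1852.0,
    "mm": 0.001, "millimeters": 0.001,
    "cm": 0.01, "centimeters": 0.01,
    "m": 1.0, "meters": 1.0,
    "km": 1000.0, "kilometers": 1000.0,
}

def distance_to_meters(s):
    """Parse strings such as '100mi' or '13nm'; bare numbers default to meters."""
    m = re.fullmatch(r"([0-9.]+)\s*([a-z]*)", s.strip().lower())
    if not m:
        raise ValueError("unparseable distance: %r" % s)
    value, unit = float(m.group(1)), m.group(2)
    if not unit:
        return value  # no unit given: assume meters
    return value * UNIT_METERS[unit]

print(distance_to_meters("100mi"))
```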
-
-== Querying the GeoJSON spatial fields
-
-Search primarily supports three types of spatial querying capability across the heterogeneous types of indexed geoshapes; this is accomplished via a JSON query structure.
-
-== Query Structure:
-
-[source, json]
-----
-{
-  "query": {
-    "field": " << fieldName >> ",
-    "geometry": {
-      "shape": {
-        "type": " << shapeDesc >> ",
-        "coordinates": [[[ ]]]
-      },
-      "relation": " << relation >> "
-    }
-  }
-}
-----
-
-The item *fieldName* is the indexed field against which the Query Structure is applied.
-
-The item *shapeDesc* can be any of the nine (9) types: Point, LineString, Polygon, MultiPoint, MultiLineString, MultiPolygon, GeometryCollection, Circle and Envelope.
-
-The item *relation* can be any of the three (3) types: intersects, contains and within.
-
-[#geospatial-query-relations,cols="1,2"]
-|===
-| Relation | Result
-
-| INTERSECTS
-| Return all documents whose spatial field intersects the query geometry.
-
-| CONTAINS
-| Return all documents whose spatial field contains the query geometry.
-
-| WITHIN
-| Return all documents whose spatial field is within the query geometry.
-
-|===
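The generic structure can be captured in a small helper that assembles the query body before it is posted to the index's query endpoint. A sketch under stated assumptions (the `geoshape_query` helper name is ours; the field, shape and relation values follow the structure documented above):

```python
import json

def geoshape_query(field, shape, relation, size=10, from_=0):
    """Build a Search geoshape query body; relation must be
    'intersects', 'contains' or 'within'."""
    if relation not in ("intersects", "contains", "within"):
        raise ValueError("unsupported relation: %r" % relation)
    return {
        "query": {
            "field": field,
            "geometry": {"shape": shape, "relation": relation},
        },
        "size": size,
        "from": from_,
    }

body = geoshape_query(
    "geojson",
    {"type": "point", "coordinates": [75.05687713623047, 22.53539059204079]},
    "contains",
)
print(json.dumps(body, indent=2))  # ready to POST to the index query endpoint
```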
-
-
-== Sample Query Structures
-
-=== A point `contains` query
-
-The `contains` query for a point returns all the matched documents with shapes that contain the given point in the query.
-
-[source, json]
-----
-{
-  "query": {
-  "field": "<>",
-  "geometry": {
-    "shape": {
-      "type": "point",
-      "coordinates": [75.05687713623047, 22.53539059204079]
-    },
-    "relation": "contains"
-    }
-  }
-}
-----
-
-
-=== LineString `intersects` query
-
-An `intersects` query for a linestring returns all the matched documents with shapes that intersect the linestring in the query.
-
-[source, json]
-----
-{
-  "query": {
-  "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "linestring",
-        "coordinates": [
-          [77.01416015625, 23.079731762449878],
-          [78.134765625, 20.385825381874263]
-        ]
-      },
-      "relation": "intersects"
-    }
-  }
-}
-----
-
-
-=== Polygon `within` Query
-
-A `within` query for a polygon returns all the matched documents with shapes that reside completely within the area of the polygon in the query.
-
-[source, json]
-----
-{
-  "query": {
-  "field": "<>",
-    "geometry": {
-      "shape": {
-        "type": "polygon",
-        "coordinates": [
-          [
-            [77.59012699127197, 12.959853852513307],
-            [77.59836673736572, 12.959853852513307],
-            [77.59836673736572, 12.965541604118611],
-            [77.59012699127197, 12.965541604118611],
-            [77.59012699127197, 12.959853852513307]
-          ]
-        ]
-      },
-      "relation": "within"
-    }
-  }
-}
-----
-
-[#detailed-geojson-examples]
-
-== Detailed examples for every QueryShape:
-
-The Sample Query Structures above introduce just some of the basic QueryShapes. The full list below covers the nine (9) unique QueryShapes, and utilizes each of them to query 1) a set of GeoJSON points and 2) a set of GeoJSON area shapes, in this case Circles (although the "area shapes" could be anything):
-
-* xref:fts-queryshape-point.adoc[Point Query]
-* xref:fts-queryshape-linestring.adoc[LineString Query]
-* xref:fts-queryshape-polygon.adoc[Polygon Query]
-* xref:fts-queryshape-multipoint.adoc[MultiPoint Query]
-* xref:fts-queryshape-multilinestring.adoc[MultiLineString Query]
-* xref:fts-queryshape-multipolygon.adoc[MultiPolygon Query]
-* xref:fts-queryshape-geometrycollection.adoc[GeometryCollection Query]
-* xref:fts-queryshape-circle.adoc[Circle Query]
-* xref:fts-queryshape-envelope.adoc[Envelope Query]
-
-
-[#creating_a_geojson_index]
-== Creating a Geospatial Index (type geojson)
-
-To be successful, a geospatial GeoJSON query must reference an index that applies the _geojson_ type mapping to the field containing any of the standard types *Point, LineString, Polygon, MultiPoint, MultiLineString, MultiPolygon, and GeometryCollection* plus the extended types of *Circle and Envelope*.
-
-This can be achieved with Couchbase Web Console, or with the REST endpoints provided for managing xref:rest-api:rest-fts-indexing.adoc[Indexes].
-Detailed instructions for setting up indexes, and specifying type mappings, are provided in xref:fts-creating-indexes.adoc[Creating Indexes].
-
-include::partial$fts-creating-geojson-common.adoc[]
-
-Once created, the index can also be accessed by means of the Search REST API;
-see xref:fts-searching-with-curl-http-requests.adoc[Searching with the REST API]. Furthermore, the index could have been created in the first place via the Search REST API; see xref:fts-creating-index-with-rest-api.adoc[Index Creation with REST API] for more information on the Search REST API syntax.
-
-[#creating_geojson_rest_query_radius_based]
-== Creating a Query: Radius-Based
-
-This section and those following provide examples of the query-bodies required to make geospatial queries with the Couchbase REST API.
-Note that more detailed information on performing queries with the Couchbase REST API can be found in xref:fts-searching-with-the-rest-api.adoc[Searching with the REST API], which shows how to use the full `curl` command and how to incorporate query-bodies into it.
-
-The following query-body specifies a longitude of `-2.235143` and a latitude of `53.482358`.
-The target-field `geo` is specified, as is a `distance` of `100` miles: this is the radius within which target-locations must reside for their documents to be returned.
-
-[source, json]
-----
-{
-  "query": {
-    "geometry": {
-      "shape": {
-        "coordinates": [
-          -2.235143,
-          53.482358
-        ],
-        "type": "circle",
-        "radius": "100mi"
-      },
-      "relation": "within"
-    },
-    "field": "geojson"
-  },
-  "size": 10,
-  "from": 0,
-  "sort": [
-    {
-      "by": "geo_distance",
-      "field": "geojson",
-      "unit": "mi",
-      "location": {
-        "lon": -2.235143,
-        "lat": 53.482358
-      }
-    }
-  ]
-}
-----
-
-The above query contains a `sort` object, which specifies that the returned documents should be ordered in terms of their _geo_distance_ from specified `lon` and `lat` coordinates: these values need not be identical to those specified in the `query` object.
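The _geo_distance_ ordering corresponds to great-circle distance from the given point, which can be sketched with the haversine formula. This is an approximation for illustration (the `haversine_miles` helper is ours, not the service's exact computation):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_miles(lon1, lat1, lon2, lat2):
    """Great-circle distance in miles between two lon/lat points."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3959.0 * asin(sqrt(a))  # mean Earth radius ~3959 miles

# Distance from the query's center point (Manchester) to central London:
d = haversine_miles(-2.235143, 53.482358, -0.127758, 51.507351)
print(round(d))  # about 163: this location lies outside the 100mi radius above
```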
-
-image::fts-geojson-search_01.png[,550,align=left]
-You can cut and paste the above Search body definition into the text area that says "search this index...".
-
-image::fts-geojson-search_02.png[,550,align=left]
-Once pasted, hit the *Search* button and the UI will show the first 10 hits.
-
-image::fts-geojson-search_03.png[,,align=left]
-The console allows searches performed via the UI to be translated dynamically into cURL examples.
-To create a cURL command, first check *[X] show advanced query settings* and then check *[X] show command-line query example*.
-
-You should have a cURL command similar to the following:
-
-[source, console]
-----
-curl -XPOST -H "Content-Type: application/json" \
--u : http://192.168.3.150:8094/api/index/test_geojson/query \
--d '{
-  "query": {
-    "geometry": {
-      "shape": {
-        "coordinates": [
-          -2.235143,
-          53.482358
-        ],
-        "type": "circle",
-        "radius": "100mi"
-      },
-      "relation": "within"
-    },
-    "field": "geojson"
-  },
-  "size": 10,
-  "from": 0,
-  "sort": [
-    {
-      "by": "geo_distance",
-      "field": "geojson",
-      "unit": "mi",
-      "location": {
-        "lon": -2.235143,
-        "lat": 53.482358
-      }
-    }
-  ]
-}'
-----
-
-If you copy and run the above cURL command in a console, the response from the Search service will report 847 total_hits, but return only the first 10 hits. A subset of formatted console output might appear as follows:
-
-NOTE: To pretty-print the response, pipe the output through the utility http://stedolan.github.io/jq[jq] to enhance readability.
-
-[source, json]
-----
-"hits": [
-  {
-    "index": "test_geojson_690ac8f8179a4a86_4c1c5584",
-    "id": "landmark_3604",
-    "score": 0.21532348857041025,
-    "sort": [
-      " \u0001\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f"
-    ]
-  },
-  {
-    "index": "test_geojson_690ac8f8179a4a86_4c1c5584",
-    "id": "landmark_5571",
-    "score": 0.12120554320433605,
-    "sort": [
-      " \u0001\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f"
-    ]
-  },
-  {
-    "index": "test_geojson_690ac8f8179a4a86_4c1c5584",
-    "id": "landmark_3577",
-    "score": 0.2153234885704102,
-    "sort": [
-      " \u0001\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f"
-    ]
-  },
-  {
-    "index": "test_geojson_690ac8f8179a4a86_4c1c5584",
-    "id": "hotel_3606",
-    "score": 0.2153234885704102,
-    "sort": [
-      " \u0001\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f"
-    ]
-  },
-  {
-    "index": "test_geojson_690ac8f8179a4a86_4c1c5584",
-    "id": "landmark_40167",
-    "score": 0.27197802451106445,
-    "sort": [
-      " \u0001\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f"
-    ]
-  },
-  {
-    "index": "test_geojson_690ac8f8179a4a86_4c1c5584",
-    "id": "landmark_36152",
-    "score": 0.12120554320433605,
-    "sort": [
-      " \u0001\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f"
-    ]
-  },
-  {
-    "index": "test_geojson_690ac8f8179a4a86_4c1c5584",
-    "id": "landmark_11329",
-    "score": 0.12120554320433605,
-    "sort": [
-      " \u0001\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f"
-    ]
-  },
-  {
-    "index": "test_geojson_690ac8f8179a4a86_4c1c5584",
-    "id": "hotel_3643",
-    "score": 0.2153234885704102,
-    "sort": [
-      " \u0001\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f"
-    ]
-  },
-  {
-    "index": "test_geojson_690ac8f8179a4a86_4c1c5584",
-    "id": "landmark_40038",
-    "score": 0.27197802451106445,
-    "sort": [
-      " \u0001\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f"
-    ]
-  },
-  {
-    "index": "test_geojson_690ac8f8179a4a86_4c1c5584",
-    "id": "airport_565",
-    "score": 0.12120554320433605,
-    "sort": [
-      " \u0001\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f\u007f"
-    ]
-  }
-]
-----
-
-
-[#creating_geojson_rest_query_bounding_box_based]
-== Creating a Query: Envelope (or Rectangle-Based)
-
-In the following query-body, a rectangle is expressed by means of an *Envelope*, whose corners are specified as +++[[minLon, maxLat], [maxLon, minLat]] = [[-2.235143, 53.482358], [28.955043, 40.991862]]+++.
-
-The results are specified to be sorted on `name` alone. Since only documents of type hotel and landmark have a name, the sort will occur on the tokenized values (the analyzer would need to be of type keyword to sort on the actual field values).
-
-[source, json]
-----
-{
-  "query": {
-    "geometry": {
-      "shape": {
-        "coordinates": [
-          [-2.235143, 53.482358],
-          [28.955043, 40.991862]
-        ],
-        "type": "envelope"
-      },
-      "relation": "within"
-    },
-    "field": "geojson"
-  },
-  "sort": ["name"],
-  "size": 10,
-  "from": 0
-}
-----
-
-A subset of formatted output might appear as follows:
-
-[source, json]
-----
-"hits": [
-  {
-    "index": "test_geojson_3dd53eb1ac88768c_4c1c5584",
-    "id": "landmark_3604",
-    "score": 0.004703467956838207,
-    "sort": [
-      "ô¿¿ô¿¿ô¿¿"
-    ]
-  },
-  {
-    "index": "test_geojson_3dd53eb1ac88768c_4c1c5584",
-    "id": "landmark_6067",
-    "score": 0.004703467956838207,
-    "sort": [
-      "ô¿¿ô¿¿ô¿¿"
-    ]
-  },
-  {
-    "index": "test_geojson_3dd53eb1ac88768c_4c1c5584",
-    "id": "landmark_16320",
-    "score": 0.004703467956838207,
-    "sort": [
-      "ô¿¿ô¿¿ô¿¿"
-    ]
-  },
-----
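The envelope coordinates are simply the legacy rectangle corners re-expressed in +++[[minLon, maxLat], [maxLon, minLat]]+++ form; a sketch of the mapping (the `envelope_from_corners` helper name is ours):

```python
def envelope_from_corners(top_left, bottom_right):
    """Convert legacy {'lon': .., 'lat': ..} corner objects to an Envelope shape."""
    return {
        "type": "envelope",
        "coordinates": [
            [top_left["lon"], top_left["lat"]],          # [minLon, maxLat]
            [bottom_right["lon"], bottom_right["lat"]],  # [maxLon, minLat]
        ],
    }

shape = envelope_from_corners(
    {"lon": -2.235143, "lat": 53.482358},
    {"lon": 28.955043, "lat": 40.991862},
)
print(shape["coordinates"])  # [[-2.235143, 53.482358], [28.955043, 40.991862]]
```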
-
-If we added two (2) more child fields to the index definition as follows, where both items are searchable as "name":
-
-image::fts-geojson-mod-index.png[,550,align=left]
-
-The sort would then be on the actual airportname and name fields, and the query itself would return these values.
-
-[source, json]
-----
-"hits": [
-  {
-    "index": "test_geojson_4391b0a68d5cc865_4c1c5584",
-    "id": "hotel_1364",
-    "score": 0.05896334942635901,
-    "sort": [
-      "'La Mirande Hotel"
-    ]
-  },
-  {
-    "index": "test_geojson_4391b0a68d5cc865_4c1c5584",
-    "id": "landmark_16144",
-    "score": 0.004703467956838207,
-    "sort": [
-      "02 Shepherd's Bush Empire"
-    ]
-  },
-  {
-    "index": "test_geojson_4391b0a68d5cc865_4c1c5584",
-    "id": "landmark_16181",
-    "score": 0.004703467956838207,
-    "sort": [
-      "2 Willow Road"
-    ]
-  },
-  {
-    "index": "test_geojson_4391b0a68d5cc865_4c1c5584",
-    "id": "landmark_16079",
-    "score": 0.004703467956838207,
-    "sort": [
-      "20 Fenchurch Street"
-    ]
-  },
-----
-
-
-[#creating_geojson_rest_query_polygon_based]
-== Creating a Query: Polygon-Based
-
-The following query-body uses an array, each of whose elements is a string, containing multiple floating-point number pairs; to specify the longitude and latitude of each of the lon/lat pairs of a polygon — known as _polygon points_.
-In all cases, the `lon` floating-point value precedes the `lat` for the correct GeoJSON winding.
-
-Here, the last-specified pair in the array is identical to the initial pair, thus explicitly closing the polygon.
-However, specifying an explicit closure in this way is optional: closure will be inferred by Couchbase Server if not explicitly specified.
-
-If a target data-location falls within the polygon, its document is returned.
-
-Request the first 10 items within the state of Utah (note that the query body consists of a simple set of hand-drawn corner points).
-The target-field `geojson` is specified: target-locations must reside within the query polygon for their documents to be returned.
-Don't worry about newlines when you paste the text.
-
-The results are specified to be sorted on `name` alone.
-Since only the `hotel` and `landmark` types have a `name` field, the sort occurs on the tokenized values (the analyzer would need to be of type `keyword` to sort on the actual field values).
-
-
-[source, json]
-----
-{
-  "query": {
-    "geometry": {
-      "shape": {
-        "coordinates": [
-          [
-            [-114.027099609375, 42.00848901572399],
-            [-114.04907226562499, 36.99377838872517],
-            [-109.05029296875, 36.99377838872517],
-            [-109.05029296875, 40.98819156349393],
-            [-111.060791015625, 40.98819156349393],
-            [-111.02783203125, 42.00848901572399],
-            [-114.027099609375, 42.00848901572399]
-          ]
-        ],
-        "type": "Polygon"
-      },
-      "relation": "within"
-    },
-    "field": "geojson"
-  },
-  "size": 10,
-  "from": 0,
-  "sort": ["name"]
-}
-----
-
-A subset of formatted output might appear as follows:
-
-[source,json]
-----
-"hits": [
-  {
-    "index": "test_geojson_4330cb585620d5e8_4c1c5584",
-    "id": "airport_7857",
-    "score": 0.27669394470240527,
-    "sort": [
-      "ô¿¿ô¿¿ô¿¿"
-    ]
-  },
-  {
-    "index": "test_geojson_4330cb585620d5e8_4c1c5584",
-    "id": "airport_7581",
-    "score": 0.13231342774148913,
-    "sort": [
-      "ô¿¿ô¿¿ô¿¿"
-    ]
-  },
-  {
-    "index": "test_geojson_4330cb585620d5e8_4c1c5584",
-    "id": "airport_7727",
-    "score": 0.27669394470240527,
-    "sort": [
-      "ô¿¿ô¿¿ô¿¿"
-    ]
-  },
-  {
-    "index": "test_geojson_4330cb585620d5e8_4c1c5584",
-    "id": "airport_9279",
-    "score": 0.27669394470240527,
-    "sort": [
-      "ô¿¿ô¿¿ô¿¿"
-    ]
-  },
-----
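-Conceptually, the `within` relation checks that a document's point location falls inside the query polygon.
-The standard even-odd (ray-casting) test can be sketched as follows; this is an illustration of the geometry only, not the Search service's actual implementation:

```python
def point_in_polygon(lon, lat, ring):
    """Even-odd rule: cast a ray eastward from the point and count
    how many polygon edges it crosses; an odd count means inside."""
    inside = False
    for (x1, y1), (x2, y2) in zip(ring, ring[1:] + ring[:1]):
        if (y1 > lat) != (y2 > lat):  # edge straddles the ray's latitude
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# The hand-drawn Utah corner points from the query above, as (lon, lat) pairs.
utah = [(-114.027099609375, 42.00848901572399),
        (-114.04907226562499, 36.99377838872517),
        (-109.05029296875, 36.99377838872517),
        (-109.05029296875, 40.98819156349393),
        (-111.060791015625, 40.98819156349393),
        (-111.02783203125, 42.00848901572399)]
```

-Running the sketch against the Utah polygon confirms, for example, that a point in Salt Lake City falls inside while a point in Denver does not.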
-
-Again, if we added two (2) more child fields to the index definition as follows, where both items are searchable as `name`:
-
-image::fts-geojson-mod-index.png[,550,align=left]
-
-The sort would be on the actual `airportname` and `name` fields (although only airports are returned), and the query itself would return these values:
-
-[source, json]
-----
-"hits": [
-  {
-    "index": "test_geojson_4391b0a68d5cc865_4c1c5584",
-    "id": "airport_6999",
-    "score": 0.13231342774148913,
-    "sort": [
-      "Brigham City"
-    ]
-  },
-  {
-    "index": "test_geojson_4391b0a68d5cc865_4c1c5584",
-    "id": "airport_7857",
-    "score": 0.27669394470240527,
-    "sort": [
-      "Bryce Canyon"
-    ]
-  },
-  {
-    "index": "test_geojson_4391b0a68d5cc865_4c1c5584",
-    "id": "airport_7074",
-    "score": 0.13231342774148913,
-    "sort": [
-      "Canyonlands Field"
-    ]
-  },
-----
-
-The following example creates the same index as xref:fts-creating-index-from-REST-geojson.adoc[Creating a GeoJSON Index via the REST API], except that it has the two (2) additional child field definitions that allow keyword sorting.
-
-[#final-geojson-index]
-
-== Final GeoJSON Search index
-
-For the samples above that return the actual `airportname` and `name` fields, and also for the nine (9) QueryShape examples referenced in <>, the Search index used is as follows:
-
-[source, json]
-----
-{
-  "type": "fulltext-index",
-  "name": "test_geojson",
-  "sourceType": "gocbcore",
-  "sourceName": "travel-sample",
-  "planParams": {
-    "maxPartitionsPerPIndex": 1024,
-    "indexPartitions": 1
-  },
-  "params": {
-    "doc_config": {
-      "docid_prefix_delim": "",
-      "docid_regexp": "",
-      "mode": "scope.collection.type_field",
-      "type_field": "type"
-    },
-    "mapping": {
-      "analysis": {},
-      "default_analyzer": "standard",
-      "default_datetime_parser": "dateTimeOptional",
-      "default_field": "_all",
-      "default_mapping": {
-        "dynamic": true,
-        "enabled": false
-      },
-      "default_type": "_default",
-      "docvalues_dynamic": false,
-      "index_dynamic": true,
-      "store_dynamic": false,
-      "type_field": "_type",
-      "types": {
-        "_default._default": {
-          "dynamic": true,
-          "enabled": true,
-          "properties": {
-            "airportname": {
-              "dynamic": false,
-              "enabled": true,
-              "fields": [
-                {
-                  "analyzer": "keyword",
-                  "include_in_all": true,
-                  "index": true,
-                  "name": "name",
-                  "store": true,
-                  "type": "text"
-                }
-              ]
-            },
-            "geoarea": {
-              "dynamic": false,
-              "enabled": true,
-              "fields": [
-                {
-                  "include_in_all": true,
-                  "index": true,
-                  "name": "geoarea",
-                  "type": "geoshape"
-                }
-              ]
-            },
-            "geojson": {
-              "dynamic": false,
-              "enabled": true,
-              "fields": [
-                {
-                  "include_in_all": true,
-                  "index": true,
-                  "name": "geojson",
-                  "type": "geoshape"
-                }
-              ]
-            },
-            "name": {
-              "dynamic": false,
-              "enabled": true,
-              "fields": [
-                {
-                  "analyzer": "keyword",
-                  "include_in_all": true,
-                  "index": true,
-                  "name": "name",
-                  "store": true,
-                  "type": "text"
-                }
-              ]
-            }
-          }
-        }
-      }
-    },
-    "store": {
-      "indexType": "scorch",
-      "segmentVersion": 15
-    }
-  },
-  "sourceParams": {}
-}
-----
-
-If viewed in the UI:
-
-image::fts-geojson-mod-index-full.png[,600,align=left]
diff --git a/modules/fts/pages/fts-supported-queries-geopoint-spatial.adoc b/modules/fts/pages/fts-supported-queries-geopoint-spatial.adoc
deleted file mode 100644
index 075cfd5bb1..0000000000
--- a/modules/fts/pages/fts-supported-queries-geopoint-spatial.adoc
+++ /dev/null
@@ -1,585 +0,0 @@
-= Geospatial Geopoint Queries
-:page-aliases: fts-supported-queries-geo-spatial.adoc
-
-[abstract]
-_Geospatial_ geopoint queries return documents that contain a location: each such document specifies a geographical location.
-
-A _geospatial geopoint query_ specifies an area and returns each document that contains a reference to a location within the area.
-Areas and locations are represented by _latitude_-_longitude_ coordinate pairs.
-
-The location data provided by a geospatial geopoint query can be any of the following:
-
-* A location, specified as a longitude-latitude coordinate pair; and a distance.
-The location determines the center of a circle whose radius-length is the specified distance.
-Documents are returned if they reference a location within the circle. For details of the units and formats in which distances can be specified, see xref:fts:fts-supported-queries-geo-spatial.adoc#specifying-distances[Specifying Distances].
-
-* Two latitude-longitude coordinate pairs.
-These are respectively taken to indicate the top left and bottom right corners of a _rectangle_.
-Documents are returned if they reference a location within the area of the rectangle.
-
-* An array of three or more latitude-longitude coordinate pairs.
-Each of the pairs is taken to indicate one corner of a _polygon_.
-Documents are returned if they reference a location within the area of the polygon.
-
-To be successful, a geospatial geopoint query must reference an index within which the _geopoint_ type mapping has been applied to the field containing the target latitude-longitude coordinate pair.
-
-Geospatial queries return _all_ documents whose locations are within the query-specified area.
-To specify _holes_ within the area so that one or more subsets of returned documents can be omitted from the final results, boolean queries should be applied to the set of documents returned by the geospatial geopoint query.
-See xref:fts-supported-queries.adoc[Supported Queries].
-
-Latitude-longitude coordinate pairs can be specified in multiple ways, including as _geohashes_; as demonstrated in xref:fts:fts-supported-queries-geo-spatial.adoc#specifying-coordinates[Specifying Coordinates], below.
-
-[#recognizing_target_data]
-== Recognizing Target Data
-
-The `travel-sample` bucket, provided for test and development, contains multiple documents that specify locations.
-For example, those that represent airports, such as `airport_1254`:
-
-[source, json]
-----
-{
-  "airportname": "Calais Dunkerque",
-  "city": "Calais",
-  "country": "France",
-  "faa": "CQF",
-  "geo": {
-    "alt": 12,
-    "lat": 50.962097,
-    "lon": 1.954764
-  },
-  "icao": "LFAC",
-  "id": 1254,
-  "type": "airport",
-  "tz": "Europe/Paris"
-}
-----
-
-The `geo` field contains the `lon` and `lat` key-value pairs.
-Note that the `geo` field needs to contain the longitude-latitude information in the form of a string (comma-separated numeric content, or a geohash), array, or object.
-
-* String syntax: `"lat,lon"`, `"geohash"`
-* Array syntax: `[lon, lat]` (where `lon` and `lat` are both floating-point numbers)
-* Object syntax: `{"lon" : 0.0, "lat": 0.0}`, `{"lng": 0.0, "lat": 0.0}` (note that these are the only accepted field names for longitude and latitude)
-
-Moreover, any other child-fields, such as `alt` (in the above example) - are ignored.
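-As an illustration only (this is not the Search service's parser), the string, array, and object forms above can be normalised to a single representation; geohash strings are omitted here for brevity:

```python
def parse_geopoint(value):
    """Normalise the accepted geopoint forms to a (lat, lon) tuple.

    Handles "lat,lon" strings, [lon, lat] arrays, and objects using the
    "lon"/"lng" and "lat" keys; geohash strings are not handled here.
    """
    if isinstance(value, str):
        lat, lon = (float(part) for part in value.split(","))
        return lat, lon
    if isinstance(value, (list, tuple)):
        lon, lat = value  # note the reversed order in arrays
        return lat, lon
    if isinstance(value, dict):
        return value["lat"], value.get("lon", value.get("lng"))
    raise TypeError("unsupported geopoint representation")
```

-All three forms of the `airport_1254` coordinates above normalise to the same `(50.962097, 1.954764)` pair.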
-
-For information on installing the `travel-sample` bucket, see xref:manage:manage-settings/install-sample-buckets.adoc[Sample Buckets].
-
-[#specifying-coordinates]
-=== Specifying Coordinates
-
-Each latitude-longitude coordinate can be expressed by means of any of the following.
-
-[#two-key-value-pairs]
-==== Two Key-Value Pairs
-
-An individual latitude-longitude coordinate can be expressed by means of an object containing two key-value pairs.
-For example, the central location for a radius-based area can be expressed as follows:
-
-[source, json]
-----
-"location": {
-       "lon": -2.235143,
-       "lat": 53.482358
-     }
-----
-
-Where multiple coordinates are required, for the specifying of a polygon, an array of such objects can be specified, as follows:
-
-[source, json]
-----
-"polygon_points": [
-  { "lat": 37.79393211306212, "lon": -122.44234633404847 },
-  { "lat": 37.77995881733997, "lon": -122.43977141339417 },
-  { "lat": 37.788031092020155, "lon": -122.4292571540557 },
-  { "lat": 37.79026946582319, "lon": -122.41149020154114 },
-  { "lat": 37.79571192027403, "lon": -122.40735054016113 },
-  { "lat": 37.79393211306212, "lon": -122.44234633404847 }
-]
-----
-
-[#a-string-containing-two-floating-point-numbers]
-==== A String, Containing Two Floating-Point Numbers
-
-An individual latitude-longitude coordinate can be expressed as a string containing two floating-point numbers — the first signifying latitude, the second longitude.
-For example, the center of a circle can be specified as follows:
-
-[source, json]
-----
-"location": "53.482358,-2.235143"
-----
-
-Where multiple coordinates are required, for the specifying of a polygon, an array of such strings can be specified, as follows:
-
-[source, json]
-----
-"polygon_points": [
-  "37.79393211306212,-122.44234633404847",
-  "37.77995881733997,-122.43977141339417",
-  "37.788031092020155,-122.42925715405579",
-  "37.79026946582319,-122.41149020154114",
-  "37.79571192027403,-122.40735054016113",
-  "37.79393211306212,-122.44234633404847"
-]
-----
-
-[#an-array-of-floating-point-numbers]
-==== An Array of Two Floating-Point Numbers
-
-An individual latitude-longitude coordinate can be expressed as an array of two floating-point numbers — the first signifying longitude, the second latitude.
-For example, the top left corner of a rectangle can be specified as follows:
-
-[source, json]
-----
-"top_left": [ -2.235143, 53.482358 ]
-----
-
-Where multiple coordinates are required, for the specifying of a polygon, an array of such arrays can be specified, as follows:
-
-[source, json]
-----
-"polygon_points": [
-  [ -122.44234633404847, 37.79393211306212 ],
-  [ -122.43977141339417, 37.77995881733997 ],
-  [ -122.42925715405579, 37.78803109202015 ],
-  [ -122.41149020154114, 37.79026946582319 ],
-  [ -122.40735054016113, 37.79571192027403 ],
-  [ -122.44234633404847, 37.79393211306212 ]
-]
-----
-
-[#a-geohash]
-==== A Geohash
-
-A latitude-longitude coordinate can be expressed by means of a single https://en.wikipedia.org/wiki/Geohash[Geohash] encoding.
-For example, the bottom right corner of a rectangle can be specified as follows:
-
-[source, json]
-----
-"bottom_right": "gcw2m0hmm6hs"
-----
-
-Where multiple coordinates are required, for the specifying of a polygon, an array of geohashes can be specified, as follows:
-
-[source, json]
-----
-"polygon_points": [
-  "9q8zjbkp",
-  "9q8yvvdh",
-  "9q8yyp1e",
-  "9q8yyrw8",
-  "9q8zn83x",
-  "9q8zjb0j"
-]
-----
-
-Means of latitude-longitude conversion to and from this format are provided at http://geohash.co/[Geohash Converter].
-Additional information, including on the _precision_ of values specified in this format, is provided at https://www.movable-type.co.uk/scripts/geohash.html[Movable Type Scripts — Geohashes].
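-A geohash interleaves the bits of a binary search over longitude and latitude; decoding reverses that process.
-A minimal decoder sketch of the standard algorithm (not tied to any Couchbase API) follows:

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # geohash alphabet (no a, i, l, o)

def decode_geohash(geohash):
    """Return the (lat, lon) centre of the cell a geohash encodes."""
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    is_lon = True  # bits alternate, starting with longitude
    for char in geohash:
        bits = BASE32.index(char)
        for shift in range(4, -1, -1):
            rng = lon_range if is_lon else lat_range
            mid = (rng[0] + rng[1]) / 2
            # A set bit selects the upper half, a clear bit the lower half.
            rng[0 if (bits >> shift) & 1 else 1] = mid
            is_lon = not is_lon
    return ((lat_range[0] + lat_range[1]) / 2,
            (lon_range[0] + lon_range[1]) / 2)
```

-Encoding is the mirror image: narrow each range toward the target coordinate and emit the comparison bits.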
-
-[#specifying-distances]
-=== Specifying Distances
-
-Multiple unit-types can be used to express distance.
-These are listed in the table below, with the strings that specify them in REST queries.
-
-[#geospatial-distance-units,cols="1,2"]
-|===
-| Units | Specify with
-
-| inches
-| `in` or `inch`
-
-| feet
-| `ft` or `feet`
-
-| yards
-| `yd` or `yards`
-
-| miles
-| `mi` or `miles`
-
-| nautical miles
-| `nm` or `nauticalmiles`
-
-| millimeters
-| `mm` or `millimeters`
-
-| centimeters
-| `cm` or `centimeters`
-
-| meters
-| `m` or `meters`
-
-| kilometers
-| `km` or `kilometers`
-
-|===
-
-The integer used to specify the number of units must precede the unit-name, with no space left in-between.
-For example, _five inches_ can be specified either by the string `"5in"`, or by the string `"5inches"`; while _thirteen nautical miles_ can be specified as either `"13nm"` or `"13nauticalmiles"`.
-
-If the unit cannot be determined, the entire string is parsed, and the distance is assumed to be in _meters_.
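-A sketch of parsing such distance strings into meters follows; the conversion factors are standard, but the parsing details are an illustration rather than the service's exact behaviour:

```python
# Meters per unit, keyed by the strings from the table above.
UNIT_IN_METERS = {
    "in": 0.0254, "inch": 0.0254,
    "ft": 0.3048, "feet": 0.3048,
    "yd": 0.9144, "yards": 0.9144,
    "mi": 1609.344, "miles": 1609.344,
    "nm": 1852.0, "nauticalmiles": 1852.0,
    "mm": 0.001, "millimeters": 0.001,
    "cm": 0.01, "centimeters": 0.01,
    "m": 1.0, "meters": 1.0,
    "km": 1000.0, "kilometers": 1000.0,
}

def distance_in_meters(text):
    """Parse a distance such as "13nm" or "100mi" into meters."""
    i = 0
    while i < len(text) and (text[i].isdigit() or text[i] == "."):
        i += 1
    value, unit = float(text[:i]), text[i:]
    return value * UNIT_IN_METERS.get(unit, 1.0)  # unknown unit: assume meters
```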
-
-[#creating_a_geospatial_index]
-[#creating_a_geospatial_geopoint_index]
-== Creating a Geospatial Index (type geopoint)
-
-To be successful, a geospatial geopoint query must reference an index that applies the _geopoint_ type mapping to the field containing the latitude-longitude coordinate pair.
-This can be achieved with Couchbase Web Console, or with the REST endpoints provided for managing xref:rest-api:rest-fts-indexing.adoc[Indexes].
-Detailed instructions for setting up indexes, and specifying type mappings, are provided in xref:fts-creating-indexes.adoc[Creating Indexes].
-
-For initial experimentation with geospatial geopoint querying (based on the type geopoint), the `geo` field of documents within the `travel-sample` bucket can be specified as a child field of the `default` type mapping (keyspace `travel-sample._default._default`) as follows:
-
-include::partial$fts-creating-geopoint-common.adoc[]
-
-Once created, the index can also be accessed by means of the Search REST API:
-see xref:fts-searching-with-curl-http-requests.adoc[Searching with the REST API].  Furthermore, the index could have been created via the Search REST API in the first place: see xref:fts-creating-index-with-rest-api.adoc[Index Creation with REST API] for more information on the Search REST API syntax.
-
-[#creating_geospatial_rest_query_radius_based]
-[#creating_geopoint_rest_query_radius_based]
-== Creating a Query: Radius-Based
-
-This section, and those following, provide examples of the query-bodies required to make geospatial queries with the Couchbase REST API.
-Note that more detailed information on performing queries with the Couchbase REST API can be found in xref:fts-searching-with-the-rest-api.adoc[Searching with the REST API]; which shows how to use the full `curl` command and how to incorporate query-bodies into it.
-
-The following query-body specifies a longitude of `-2.235143` and a latitude of `53.482358`.
-The target-field `geo` is specified, as is a `distance` of `100` miles: this is the radius within which target-locations must reside for their documents to be returned.
-
-[source, json]
-----
-{
-  "from": 0,
-  "size": 10,
-  "query": {
-    "location": {
-      "lon": -2.235143,
-      "lat": 53.482358
-     },
-      "distance": "100mi",
-      "field": "geo"
-    },
-  "sort": [
-    {
-      "by": "geo_distance",
-      "field": "geo",
-      "unit": "mi",
-      "location": {
-      "lon": -2.235143,
-      "lat": 53.482358
-      }
-    }
-  ]
-}
-----
-
-The above query contains a `sort` object, which specifies that the returned documents should be ordered in terms of their _geo_distance_ from specified `lon` and `lat` coordinates: these values need not be identical to those specified in the `query` object.
-
-image::fts-geopoint-search_01.png[,550,align=left]
-You can copy and paste the above Search body definition into the text area that says "search this index..."
-
-image::fts-geopoint-search_02.png[,550,align=left]
-Once pasted, hit the *Search* button and the UI will show the first 10 hits.
-
-image::fts-geopoint-search_03.png[,,align=left]
-The console allows searches performed via the UI to be translated dynamically into cURL examples.
-To create a cURL command, first check *[X] show advanced query settings* and then check *[X] show command-line query example*.
-
-You should have a cURL command similar to the following:
-
-[source, console]
-----
-curl -XPOST -H "Content-Type: application/json" \
--u : http://localhost:8094/api/index/test_geopoint/query \
--d '{
-  "from": 0,
-  "size": 10,
-  "query": {
-    "location": {
-      "lon": -2.235143,
-      "lat": 53.482358
-    },
-    "distance": "100mi",
-    "field": "geo"
-  },
-  "sort": [
-    {
-      "by": "geo_distance",
-      "field": "geo",
-      "unit": "mi",
-      "location": {
-        "lon": -2.235143,
-        "lat": 53.482358
-      }
-    }
-  ]
-}'
-----
-
-If you copy and run the above cURL command in the console, the response from the Search service will report 847 `total_hits` but return only the first 10 hits.  A subset of formatted console output might appear as follows:
-
-NOTE: To pretty-print the response, pipe the output through the *http://stedolan.github.io/jq[jq]* utility to enhance readability.
-
-[source, json]
-----
-"hits": [
-  {
-    "index": "test_geopoint_7d088ca77bbecbe2_4c1c5584",
-    "id": "landmark_17411",
-    "score": 0.025840756648257503,
-    "sort": [
-      " \u0001?E#9>N\f\"e"
-    ]
-  },
-  {
-    "index": "test_geopoint_7d088ca77bbecbe2_4c1c5584",
-    "id": "landmark_17409",
-    "score": 0.025840756648257503,
-    "sort": [
-      " \u0001?O~i*(kD,"
-    ]
-  },
-  {
-    "index": "test_geopoint_7d088ca77bbecbe2_4c1c5584",
-    "id": "landmark_17403",
-    "score": 0.025840756648257503,
-    "sort": [
-      " \u0001?Sg*|/t\u001f\u0002"
-    ]
-  }
-]
-----
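-The returned hits can be sanity-checked on the client: each target location should lie within 100 miles of the query's centre.
-A minimal haversine sketch (for illustration; the Search service computes distances internally):

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points."""
    radius = 3958.8  # mean Earth radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2)
    return 2 * radius * math.asin(math.sqrt(a))
```

-For example, the query centre (53.482358, -2.235143) is in Manchester; central London lies roughly 160 miles away and so would fall outside the 100-mile radius.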
-
-[#creating_geospatial_rest_query_bounding_box_based]
-[#creating_geoppoint_rest_query_bounding_box_based]
-== Creating a Query: Rectangle-Based
-
-In the following query-body, the `top_left` of a rectangle is expressed by means of an array of two floating-point numbers, specifying a longitude of `-2.235143` and a latitude of `53.482358`.
-The `bottom_right` is expressed by means of key-value pairs, specifying a longitude of `28.955043` and a latitude of `40.991862`.
-The results are specified to be sorted on `name` alone.
-
-[source, json]
-----
-{
-  "from": 0,
-  "size": 10,
-  "query": {
-    "top_left": [-2.235143, 53.482358],
-    "bottom_right": {
-      "lon": 28.955043,
-      "lat": 40.991862
-     },
-    "field": "geo"
-  },
-  "sort": [
-    "name"
-  ]
-}
-----
-
-A subset of formatted output might appear as follows:
-
-[source, json]
-----
-"hits": [
-  {
-    "index": "test_geopoint_7d088ca77bbecbe2_4c1c5584",
-    "id": "landmark_16144",
-    "score": 0.004836809397039384,
-    "sort": [
-      "02"
-    ]
-  },
-  {
-    "index": "test_geopoint_7d088ca77bbecbe2_4c1c5584",
-    "id": "hotel_9905",
-    "score": 0.01625607942050202,
-    "sort": [
-      "1"
-    ]
-  },
-  {
-    "index": "test_geopoint_7d088ca77bbecbe2_4c1c5584",
-    "id": "hotel_16460",
-    "score": 0.004836809397039384,
-    "sort": [
-      "11"
-    ]
-  },
-  {
-    "index": "test_geopoint_7d088ca77bbecbe2_4c1c5584",
-    "id": "hotel_21674",
-    "score": 0.010011952055063241,
-    "sort": [
-      "17"
-    ]
-  }
-]
-----
-
-[#creating_geospatial_rest_query_polygon_based]
-[#creating_geopoint_rest_query_polygon_based]
-
-== Creating a Query: Polygon-Based
-
-The following query-body uses an array whose elements are strings, each containing two floating-point numbers that specify the latitude and longitude of one corner of a polygon (the corners are known as _polygon points_).
-In each string, the `lat` floating-point value precedes the `lon`.
-
-Here, the last-specified string in the array is identical to the initial string, thus explicitly closing the polygon.
-However, specifying an explicit closure in this way is optional: closure will be inferred by Couchbase Server if not explicitly specified.
-
-If a target data-location falls within the polygon, its document is returned.
-The results are specified to be sorted on `name` alone.
-
-[source, json]
-----
-{
-  "query": {
-    "field": "geo",
-    "polygon_points": [
-      "37.79393211306212,-122.44234633404847",
-      "37.77995881733997,-122.43977141339417",
-      "37.788031092020155,-122.42925715405579",
-      "37.79026946582319,-122.41149020154114",
-      "37.79571192027403,-122.40735054016113",
-      "37.79393211306212,-122.44234633404847"
-    ]
-  },
-  "sort": [
-    "name"
-  ]
-}
-----
-
-A subset of formatted output might appear as follows:
-
-[source,json]
-----
-"hits": [
-  {
-    "index": "test_geopoint_7d088ca77bbecbe2_4c1c5584",
-    "id": "landmark_25944",
-    "score": 0.23634379439298683,
-    "sort": [
-      "4"
-    ]
-  },
-  {
-    "index": "test_geopoint_7d088ca77bbecbe2_4c1c5584",
-    "id": "landmark_25681",
-    "score": 0.31367419004657393,
-    "sort": [
-      "alta"
-    ]
-  },
-  {
-    "index": "test_geopoint_7d088ca77bbecbe2_4c1c5584",
-    "id": "landmark_25686",
-    "score": 0.31367419004657393,
-    "sort": [
-      "atherton"
-    ]
-  }
-]
-----
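-Note that these `polygon_points` strings put latitude first, whereas GeoJSON coordinate arrays put longitude first.
-A conversion sketch between the two forms, for illustration only:

```python
def polygon_points_to_geojson(points):
    """Convert "lat,lon" polygon_points strings to a GeoJSON Polygon."""
    ring = []
    for point in points:
        lat, lon = (float(part) for part in point.split(","))
        ring.append([lon, lat])  # GeoJSON order: longitude first
    if ring[0] != ring[-1]:
        ring.append(list(ring[0]))  # GeoJSON rings must be closed
    return {"type": "Polygon", "coordinates": [ring]}
```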
-
-NOTE: When you sort on a string field that uses the default analyzer, the string is tokenized, and you may get unexpected results because you are sorting on the tokenized values.  To sort on the actual text in the field, use *analyzer: "keyword"* to sort by the original text.  In addition, if you want to include the keyword value in the index itself, you will need to check *[X] store* or check *[X] docvalues*.
-
-== Sorting by Keywords
-
-To sort by the actual names, we need to take into account that documents of type="airport" have a field called "airportname", while documents of type="landmark" have a field called "name".
-
-Edit the index and insert two more child fields as follows:
-
-* for type="airport"
-+
-image::fts-geopoint-update1.png[,600,align=left]
-
-* for type="landmark"
-+
-image::fts-geopoint-update2.png[,600,align=left]
-
-* Click the *Update Index* button
-
-If you look carefully above, both of the actual fields "airportname" and "name" will be searchable as `name`.
-
-At this point, if you edit the index again, the complete definition should look like this:
-
-image::fts-geopoint-updated-index.png[,600,align=left]
-
-Now, repeating the above "Polygon-Based" query, we see that the data is sorted on the original field values.
-
-[source,json]
-----
-"hits": [
-  {
-    "index": "test_geopoint_6e91b22c20945813_4c1c5584",
-    "id": "landmark_25681",
-    "score": 0.31367419004657393,
-    "sort": [
-      "Alta Plaza Park"
-    ]
-  },
-  {
-    "index": "test_geopoint_6e91b22c20945813_4c1c5584",
-    "id": "landmark_25686",
-    "score": 0.31367419004657393,
-    "sort": [
-      "Atherton House"
-    ]
-  },
-  {
-    "index": "test_geopoint_6e91b22c20945813_4c1c5584",
-    "id": "landmark_25944",
-    "score": 0.23634379439298683,
-    "sort": [
-      "Big 4 Restaurant"
-    ]
-  },
-  {
-    "index": "test_geopoint_6e91b22c20945813_4c1c5584",
-    "id": "landmark_25739",
-    "score": 0.31367419004657393,
-    "sort": [
-      "Blu"
-    ]
-  },
-  {
-    "index": "test_geopoint_6e91b22c20945813_4c1c5584",
-    "id": "landmark_36047",
-    "score": 0.25593551041769463,
-    "sort": [
-      "Cable Car Museum"
-    ]
-  },
-----
-
-Finally, since we checked *[X] store* for the child mappings of both "airportname" and "name", we modify the above "Polygon-Based" query by adding *"fields": ["name"],* and then run it in the UI.
-
-[source, json]
-----
-{
-  "fields": ["name"],
-  "query": {
-    "field": "geo",
-    "polygon_points": [
-      "37.79393211306212,-122.44234633404847",
-      "37.77995881733997,-122.43977141339417",
-      "37.788031092020155,-122.42925715405579",
-      "37.79026946582319,-122.41149020154114",
-      "37.79571192027403,-122.40735054016113",
-      "37.79393211306212,-122.44234633404847"
-    ]
-  },
-  "sort": [
-    "name"
-  ]
-}
-----
-
-Copy and paste the above into the UI's index search text box; the result will be as follows:
-
-image::fts-geopoint-updated-index-seach-stored.png[,,align=left]
-
-Because we added *"fields": ["name"],* and the fields "airportname" and "name" were specified to be stored, the index returns the actual value (under the mapped name `name`) in the UI.  If we pass the new query body to cURL, the value will also be returned via the REST call.
diff --git a/modules/fts/pages/fts-supported-queries-geospatial.adoc b/modules/fts/pages/fts-supported-queries-geospatial.adoc
deleted file mode 100644
index ae2c9c8e4e..0000000000
--- a/modules/fts/pages/fts-supported-queries-geospatial.adoc
+++ /dev/null
@@ -1,16 +0,0 @@
-= Geospatial Queries
-
-[abstract]
-_Geospatial_ queries return documents that contain location in either legacy Geopoint format or standard GeoJSON structures.
-
-== Geopoint (type geopoint)
-
-Legacy Geopoint documents specify a geographical location.
-
-For these queries, the Search service lets users index single-dimensional geopoint/location fields and perform various bounding queries (point-distance, bounded-rectangle, and bounded-polygon) against the indexed geopoint field.
-
-For higher-level shapes and structures, refer to GeoJSON below.
-
-== GeoJSON (type geojson) 
-
-include::partial$fts-geojson-intro-common.adoc[]
diff --git a/modules/fts/pages/fts-supported-queries-match-all.adoc b/modules/fts/pages/fts-supported-queries-match-all.adoc
deleted file mode 100644
index 275ff29c39..0000000000
--- a/modules/fts/pages/fts-supported-queries-match-all.adoc
+++ /dev/null
@@ -1,9 +0,0 @@
-= Match All Query
-
-Matches _all_ documents in an index, irrespective of terms.
-For example, if an index is created on the `travel-sample` bucket for documents of type `zucchini`, the _match all_ query returns all document IDs from the `travel-sample` bucket, even though the bucket contains no documents of type `zucchini`.
-
-[source,json]
-----
-{ "match_all": {} }
-----
\ No newline at end of file
diff --git a/modules/fts/pages/fts-supported-queries-match-none.adoc b/modules/fts/pages/fts-supported-queries-match-none.adoc
deleted file mode 100644
index 1ef2ef25a8..0000000000
--- a/modules/fts/pages/fts-supported-queries-match-none.adoc
+++ /dev/null
@@ -1,8 +0,0 @@
-= Match None Query
-
-Matches no documents in the index.
-
-[source,json]
-----
-{ "match_none": {} }
-----
diff --git a/modules/fts/pages/fts-supported-queries-match-phrase.adoc b/modules/fts/pages/fts-supported-queries-match-phrase.adoc
deleted file mode 100644
index 8ece9a16cc..0000000000
--- a/modules/fts/pages/fts-supported-queries-match-phrase.adoc
+++ /dev/null
@@ -1,23 +0,0 @@
-= Match Phrase Query
-
-The input text is analyzed, and a phrase query is built with the terms resulting from the analysis.
-This type of query searches for terms in the target that occur _in the positions and offsets indicated by the input_: this depends on _term vectors_, which must have been included in the creation of the index used for the search.
-
-For example, a match phrase query for `location for functions` is matched with `locate the function`, if the standard analyzer is used: this analyzer uses a _stemmer_, which tokenizes `location` and `locate` to `locat`, and reduces `functions` and `function` to `function`.
-Additionally, the analyzer employs _stop-word_ removal, which removes small and less significant words from input and target text, so that matches are attempted only on the more significant elements of vocabulary: in this case, `for` and `the` are removed.
-Following this processing, the tokens `locat` and `function` are recognized as _common to both input and target_; and also as being both _in the same sequence as_, and _at the same distance from_ one another; and therefore a match is made.
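-The pipeline described above can be mimicked with a toy analyzer; here the stop-word list is tiny and the stems are hard-coded stand-ins for a real stemming algorithm such as Porter's:

```python
STOPWORDS = {"for", "the"}
# Hard-coded stems standing in for a real stemmer.
STEMS = {"location": "locat", "locate": "locat", "functions": "function"}

def analyze(text):
    """Toy analyzer: lowercase, split, drop stop words, stem."""
    tokens = [t for t in text.lower().split() if t not in STOPWORDS]
    return [STEMS.get(t, t) for t in tokens]
```

-With this sketch, both `"location for functions"` and `"locate the function"` analyze to the token sequence `["locat", "function"]`, which is why the phrase query matches.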
-
-== Example
-
-The following JSON object demonstrates specification of a match phrase query:
-
-
-[source,json]
-----
-{
- "match_phrase": "very nice",
- "field": "reviews.content"
-}
-----
-
-A demonstration of the match phrase query using the Java SDK can be found in xref:3.2@java-sdk::full-text-searching-with-sdk.adoc[Searching from the SDK].
\ No newline at end of file
diff --git a/modules/fts/pages/fts-supported-queries-match.adoc b/modules/fts/pages/fts-supported-queries-match.adoc
deleted file mode 100644
index 1df43464e3..0000000000
--- a/modules/fts/pages/fts-supported-queries-match.adoc
+++ /dev/null
@@ -1,40 +0,0 @@
-[#Match-Query]
-= Match Query
-
-A term without any other syntax is interpreted as a match query for the term in the default field. The default field is `_all`.
-
-For example, `pool` performs match query for the term `pool`.
-
-A match query _analyzes_ input text and uses the results to query an index. Options include specifying an analyzer, performing a _fuzzy match_, and performing a _prefix match_.
-
-By default, the analyzer used for the search text is what was set for the specified field during index creation. For information on analyzers, see xref:fts-analyzers.adoc[Understanding Analyzers].
-
-NOTE: If the field is not specified, the match query will target the `_all` field within the index. Including content within the `_all` field is a setting during index creation.
-
-When fuzzy matching is used, if the [.param]`fuzziness` parameter is set to a non-zero integer, the analyzed text is matched with the corresponding level of fuzziness. The maximum supported fuzziness is 2.
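-Fuzziness is measured as edit (Levenshtein) distance: the number of single-character insertions, deletions, and substitutions needed to turn one term into another.
-A minimal sketch:

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic dynamic-programming table."""
    previous = list(range(len(b) + 1))
    for i, char_a in enumerate(a, 1):
        current = [i]
        for j, char_b in enumerate(b, 1):
            current.append(min(previous[j] + 1,      # deletion
                               current[j - 1] + 1,   # insertion
                               previous[j - 1] + (char_a != char_b)))  # substitution
        previous = current
    return previous[-1]
```

-A match query for `hotel` with a fuzziness of 1 would therefore also match `hostel`, since the two terms are one edit apart.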
-
-When a prefix match is used, the [.param]`prefix_length` parameter specifies that for a match to occur, a prefix of specified length must be shared by the input-term and the target text-element.
-
-When an operator field is used, the [.param]`operator` decides the boolean logic used to interpret the text in the match field. 
-
-For example, an operator value of `"and"` means the match query text would be treated like `location` AND `hostel`.  
-The default operator value of `"or"` means the match query text would be treated like `location` OR `hostel`.
-
-A demonstration of the match query using the Java SDK can be found in xref:3.2@java-sdk::full-text-searching-with-sdk.adoc[Searching from the SDK].
-
-== Example
-
-The following JSON object demonstrates the specification of a match query:
-
-[source,json]
-----
-{
- "match": "location hostel",
- "field": "reviews.content",
- "analyzer": "standard",
- "fuzziness": 2,
- "prefix_length": 4,
- "operator": "and"
-}
-----
-
diff --git a/modules/fts/pages/fts-supported-queries-non-analytic-query.adoc b/modules/fts/pages/fts-supported-queries-non-analytic-query.adoc
deleted file mode 100644
index ea1c1d1731..0000000000
--- a/modules/fts/pages/fts-supported-queries-non-analytic-query.adoc
+++ /dev/null
@@ -1,15 +0,0 @@
-= Non-Analytic Queries
-
-Non-analytic queries do not support analysis on their inputs.
-This means that only exact matches are returned.
-
-The following queries are non-Analytic queries:
-
-* xref:fts-supported-queries-term.adoc[Term]
-* xref:fts-supported-queries-phrase.adoc[Phrase]
-* xref:fts-supported-queries-prefix-query.adoc[Prefix]
-* xref:fts-supported-queries-regexp.adoc[Regexp]
-* xref:fts-supported-queries-fuzzy.adoc[Fuzzy]
-* xref:fts-supported-queries-wildcard.adoc[Wildcard]
-
-For information on analyzers, see xref:fts-index-analyzers.adoc[Understanding Analyzers].
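-
-The contrast with analytic queries can be sketched as follows. Assuming a field indexed with the `standard` analyzer (which lowercases tokens), a term query for `Good` performs no analysis and so finds no match, while a match query first analyzes `Good` down to `good` and does match:
-
-[source,json]
-----
-{ "term": "Good", "field": "reviews.content" }
-----
-
-[source,json]
-----
-{ "match": "Good", "field": "reviews.content" }
-----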
diff --git a/modules/fts/pages/fts-supported-queries-numeric-range.adoc b/modules/fts/pages/fts-supported-queries-numeric-range.adoc
deleted file mode 100644
index 833bf9201c..0000000000
--- a/modules/fts/pages/fts-supported-queries-numeric-range.adoc
+++ /dev/null
@@ -1,38 +0,0 @@
-[#Numeric-Ranges]
-= Numeric Range Query
-
-A _numeric range_ query finds documents containing a numeric value in the specified field within the specified range.
-
-Define the endpoints using the fields [.param]`min` and [.param]`max`.
-You can omit any one endpoint, but not both.
-
-The [.param]`inclusive_min` and [.param]`inclusive_max` properties control whether or not the endpoints are included or excluded.
-
-By default, [.param]`min` is inclusive and [.param]`max` is exclusive.
-
-A demonstration of the numeric range Query using the Java SDK can be found in xref:3.2@java-sdk::full-text-searching-with-sdk.adoc[Searching from the SDK].
-
-== Example
-
-[source,json]
-----
-{
- "min": 100, "max": 1000,
- "inclusive_min": false,
- "inclusive_max": false,
- "field": "id"
-}
-----
-
-== Numeric Ranges
-
-You can specify numeric ranges with the `>`, `>=`, `<`, and `\<=` operators, each followed by a numeric value.
-
-=== Example
-
-[source]
-----
-reviews.ratings.Cleanliness:>4
-----
-
-The above query performs a numeric range query on the `reviews.ratings.Cleanliness` field, for values greater than 4.
\ No newline at end of file
diff --git a/modules/fts/pages/fts-supported-queries-phrase.adoc b/modules/fts/pages/fts-supported-queries-phrase.adoc
deleted file mode 100644
index 53ce0135ef..0000000000
--- a/modules/fts/pages/fts-supported-queries-phrase.adoc
+++ /dev/null
@@ -1,17 +0,0 @@
-= Phrase Query
-
-A _phrase query_ searches for terms occurring at the specified position and offsets. It performs an exact term-match for all the phrase-constituents without using an analyzer.
-
-[source,json]
-----
-{
-  "terms": ["nice", "view"],
-  "field": "reviews.content"
-}
-----
-
-A demonstration of the phrase query using the Java SDK can be found in xref:3.2@java-sdk::full-text-searching-with-sdk.adoc[Searching from the SDK].
-
\ No newline at end of file
diff --git a/modules/fts/pages/fts-supported-queries-prefix-query.adoc b/modules/fts/pages/fts-supported-queries-prefix-query.adoc
deleted file mode 100644
index 990cc8be04..0000000000
--- a/modules/fts/pages/fts-supported-queries-prefix-query.adoc
+++ /dev/null
@@ -1,12 +0,0 @@
-= Prefix Query
-
-A _prefix_ query finds documents containing terms that start with the specified prefix.
-Please note that the prefix query is a non-analytic query, meaning it won't perform any text analysis on the query text.
-
-[source,json]
-----
-{
- "prefix": "inter",
- "field": "reviews.content"
-}
-----
diff --git a/modules/fts/pages/fts-supported-queries-query-string-query.adoc b/modules/fts/pages/fts-supported-queries-query-string-query.adoc
deleted file mode 100644
index 69fc2f1479..0000000000
--- a/modules/fts/pages/fts-supported-queries-query-string-query.adoc
+++ /dev/null
@@ -1,18 +0,0 @@
-= Query String Query
-
-A _query string_ uses a special syntax to express a given query.
-
-[source,json]
-----
-{ "query": "+nice +view" }
-----
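-
-The syntax also supports field scoping with `field:term`, along with `+` (must match) and `-` (must not match) modifiers. A hypothetical sketch, assuming fields from the `travel-sample` dataset:
-
-[source,json]
-----
-{ "query": "description:pool +free_breakfast:true" }
-----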
-
-A demonstration of a query string Query using the Java SDK can be found in xref:3.2@java-sdk::full-text-searching-with-sdk.adoc[Searching from the SDK].
-
-NOTE: Full Text Searches conducted with the Couchbase Web Console themselves use query strings.
-(See xref:fts-searching-from-the-UI.adoc[Searching from the UI].)
-
-Certain queries supported by FTS are not yet supported by the query string syntax.
-These include wildcards and regular expressions.
-
-More detailed information is provided in xref:fts-query-string-syntax.adoc[Query String Syntax].
diff --git a/modules/fts/pages/fts-supported-queries-range-query.adoc b/modules/fts/pages/fts-supported-queries-range-query.adoc
deleted file mode 100644
index 299430a67d..0000000000
--- a/modules/fts/pages/fts-supported-queries-range-query.adoc
+++ /dev/null
@@ -1,9 +0,0 @@
-= Range Queries
-
-Range queries accept ranges for dates and numbers, and return documents that contain values within those ranges.
-
-The following queries are range queries:
-
-* xref:fts-supported-queries-numeric-range.adoc[Numeric Range]
-* xref:fts-supported-queries-date-range.adoc[Date Range]
-* xref:fts-supported-queries-term-range.adoc[Term Range]
diff --git a/modules/fts/pages/fts-supported-queries-regexp.adoc b/modules/fts/pages/fts-supported-queries-regexp.adoc
deleted file mode 100644
index ba2088f375..0000000000
--- a/modules/fts/pages/fts-supported-queries-regexp.adoc
+++ /dev/null
@@ -1,14 +0,0 @@
-= Regexp Query
-
-A _regexp_ query finds documents containing terms that match the specified regular expression.
-Please note that the regex query is a non-analytic query, meaning it won't perform any text analysis on the query text.
-
-[source,json]
-----
-{
- "regexp": "inter.+",
- "field": "reviews.content"
-}
-----
-
-A demonstration of a regexp query using the Java SDK can be found in xref:3.2@java-sdk::full-text-searching-with-sdk.adoc[Searching from the SDK].
diff --git a/modules/fts/pages/fts-supported-queries-special-query.adoc b/modules/fts/pages/fts-supported-queries-special-query.adoc
deleted file mode 100644
index d751513547..0000000000
--- a/modules/fts/pages/fts-supported-queries-special-query.adoc
+++ /dev/null
@@ -1,8 +0,0 @@
-= Special Queries
-
-_Special_ queries are usually employed either in combination with other queries, or to test the system.
-
-The following queries are special queries:
-
-* xref:fts-supported-queries-match-all.adoc[Match All]
-* xref:fts-supported-queries-match-none.adoc[Match None]
\ No newline at end of file
diff --git a/modules/fts/pages/fts-supported-queries-term-range.adoc b/modules/fts/pages/fts-supported-queries-term-range.adoc
deleted file mode 100644
index 1d36d4cac1..0000000000
--- a/modules/fts/pages/fts-supported-queries-term-range.adoc
+++ /dev/null
@@ -1,17 +0,0 @@
-= Term Range Query
-
-A _term range_ query finds documents containing a term in the specified field within the specified range.
-Define the endpoints using the fields [.param]`min` and [.param]`max`.
-You can omit one endpoint, but not both.
-The [.param]`inclusive_min` and [.param]`inclusive_max` properties control whether or not the endpoints are included or excluded.
-By default, [.param]`min` is inclusive and [.param]`max` is exclusive.
-
-[source,json]
-----
-{
- "min": "foo", "max": "foof",
- "inclusive_min": false,
- "inclusive_max": false,
- "field": "desc"
-}
-----
\ No newline at end of file
diff --git a/modules/fts/pages/fts-supported-queries-term.adoc b/modules/fts/pages/fts-supported-queries-term.adoc
deleted file mode 100644
index bb197bd70a..0000000000
--- a/modules/fts/pages/fts-supported-queries-term.adoc
+++ /dev/null
@@ -1,15 +0,0 @@
-= Term Query
-
-A term query is the simplest possible query. It performs an exact match in the index for the provided term.
-
-== Example
-
-[source,json]
-----
-{
-  "term": "locate",
-  "field": "reviews.content"
-}
-----
-
-A demonstration of term queries using the Java SDK can be found  in xref:3.2@java-sdk::full-text-searching-with-sdk.adoc[Searching from the SDK].
diff --git a/modules/fts/pages/fts-supported-queries-wildcard.adoc b/modules/fts/pages/fts-supported-queries-wildcard.adoc
deleted file mode 100644
index c85a6c0dad..0000000000
--- a/modules/fts/pages/fts-supported-queries-wildcard.adoc
+++ /dev/null
@@ -1,16 +0,0 @@
-= Wildcard Query
-
-A _wildcard_ query uses a wildcard expression to search within individual terms for matches.
-Wildcard expressions use `?` to match any single character, and `*` to match zero or more characters.
-Wildcard expressions can appear in the middle or end of a term, but not at the beginning.
-Please note that the wildcard query is a non-analytic query, meaning it won't perform any text analysis on the query text.
-
-[source,json]
-----
-{
- "wildcard": "inter*",
- "field": "reviews.content"
-}
-----
-
-A demonstration of a wildcard query using the Java SDK can be found in  xref:3.2@java-sdk::full-text-searching-with-sdk.adoc[Searching from the SDK].
diff --git a/modules/fts/pages/fts-supported-queries.adoc b/modules/fts/pages/fts-supported-queries.adoc
deleted file mode 100644
index 5d9e2b324a..0000000000
--- a/modules/fts/pages/fts-supported-queries.adoc
+++ /dev/null
@@ -1,31 +0,0 @@
-= Supported Queries
-:page-aliases: query-types.adoc
-
-[abstract]
-With Full Text Search you can perform queries on Full Text Indexes. You can perform the queries either by using Couchbase Web Console, the Couchbase REST API, {sqlpp} (using search functions in the Query service), or the Couchbase SDK.
-
-[#query-specification-options]
-== Query-Specification Options
-
-Full Text Search allows a range of query options. These include:
-
-* Input-text and target-text can be _analyzed_: this transforms input-text into _token-streams_, according to different specified criteria, so allowing richer and more finely controlled forms of text-matching.
-* The _fuzziness_ of a query can be specified so that the scope of matches can be constrained to a particular level of exactitude.
-A high degree of fuzziness means that a large number of partial matches may be returned.
-* Multiple queries can be specified for simultaneous processing, with one given a higher _boost_ than another, so ensuring that its results are returned at the top of the set.
-* _Regular expressions_ and _wildcards_ can be used in text-specification for search-input.
-* _Compound_ queries can be designed, such that appropriate conjunction or disjunction of the total result-set can be returned.
-
-For information on how to execute queries, see xref:fts-searching-from-the-UI.adoc[Searching from the UI].
-
-This section includes the following supported queries:
-
-* xref:fts-supported-queries-query-string-query.adoc[Query String Query]
-* xref:fts-supported-queries-match.adoc[Match]
-* xref:fts-supported-queries-match-phrase.adoc[Match Phrase]
-* xref:fts-supported-queries-non-analytic-query.adoc[Non Analytic]
-* xref:fts-supported-queries-compound-query.adoc[Compound]
-* xref:fts-supported-queries-range-query.adoc[Range]
-* xref:fts-supported-queries-geo-spatial.adoc[Geospatial]
-* xref:fts-supported-queries-special-query.adoc[Special]
-* xref:fts-supported-queries-query-options.adoc[Query Options]
diff --git a/modules/fts/pages/fts-type-identifiers.adoc b/modules/fts/pages/fts-type-identifiers.adoc
deleted file mode 100644
index 3b70e88f09..0000000000
--- a/modules/fts/pages/fts-type-identifiers.adoc
+++ /dev/null
@@ -1,28 +0,0 @@
-= Specifying Type Identifiers
-
-A _type identifier_ allows the documents in a bucket to be identified and filtered for inclusion into the index according to their _type_. When the *Add Index, Edit Index*, or *Clone Index* screen is accessed, a *Type Identifier* panel is displayed:
-
-[#type_identifier_image]
-image::fts-type-identifier-ui.png[,75%]
-
-There are three options, each of which gives the index a particular way of determining the type of each document in the bucket.
-This document filtering via the *Type Identifier* is only active if you append a `.` and then a `substring` to the end of the `scope.collection` in the Type Mapping.
-
-== JSON type field
-The name of a document field. The value of this field is used by the index to determine the type of the document.
-
-NOTE: FTS Indexing does not work for fields having a dot (. or period) in the field name. Users must avoid adding a dot (. or period) in the field name. +
-*Unsupported field names*: `field.name` or `country.name`. For example, `{ "database.name": "couchbase"}` +
-*Supported field names*: `fieldname` or `countryname`. For example, `{ "databasename": "couchbase"}`
-
-The default value is `type`, meaning that the index searches for a field in each document whose name is `type`.
-
-Each document that contains a field with that name is duly included in the index, with the value of the field specifying the type of the document. 
-
-NOTE: The value of the field should be of text type and cannot be an array or JSON object.
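-
-In the index definition JSON, this option corresponds to the `doc_config` section. A sketch of the relevant fragment, assuming documents carry a `type` field:
-
-[source,json]
-----
-"doc_config": {
-  "mode": "type_field",
-  "type_field": "type"
-}
-----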
-
-== Doc ID up to separator
-The characters in the ID of each document, up to but not including the separator. For example, if the document's ID is `hotel_10123`, the value `hotel` is determined by the index to be the type of document. The value entered into the field should be the separator character used in the ID: for example, `_`.
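-
-In the index definition JSON, this option corresponds to a `doc_config` fragment such as the following sketch (assuming `_` as the separator):
-
-[source,json]
-----
-"doc_config": {
-  "mode": "docid_prefix",
-  "docid_prefix_delim": "_"
-}
-----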
-
-== Doc ID with regex
-A *https://github.com/google/re2/wiki/Syntax[RE2]* regular expression that is applied by the index to the ID of each document. The resulting value is determined to be the type of the document. (This option may be used when the targeted document-subset contains neither a suitable *JSON type field* nor an ID that follows a naming convention suitable for *Doc ID up to separator*.) The value entered into the field should be the regular expression to be used.
diff --git a/modules/fts/pages/fts-type-mapping-specifying-fields.adoc b/modules/fts/pages/fts-type-mapping-specifying-fields.adoc
deleted file mode 100644
index 82ebb967aa..0000000000
--- a/modules/fts/pages/fts-type-mapping-specifying-fields.adoc
+++ /dev/null
@@ -1,27 +0,0 @@
-[#specifying-fields-for-type-mapping]
-= Specifying fields for Type Mapping
-
-A Full Text Index can be defined not only to include (or exclude) documents of a certain type but also to include (or exclude) specified fields within each of the typed documents.
-
-To specify one or more fields, hover with the mouse cursor over a row in the Type Mappings panel that contains an enabled type mapping. Buttons labeled *edit* and *+* appear:
-
-image::fts-type-mappings-ui-fields-buttons.png[,700,align=left]
-
-Left-clicking on the *edit* button displays the following interface:
-
-image::fts-type-mappings-ui-edit.png[,500,align=left]
-
-This allows the mapping to be deleted or associated with a different analyzer. 
-
-NOTE: FTS Indexing does not work for fields having a dot (. or period) in the field name. Users must avoid adding a dot (. or period) in the field name. Field names such as `field.name` or `country.name` are not supported: for example, `{ "database.name": "couchbase" }`.
-
-If the *only index specified fields* checkbox is checked, only fields specified by the user are included in the index.
-
-Left-clicking on the *+* button displays a pop-up that features two options:
-
-image::fts-type-mappings-ui-field-options.png[,700,align=left]
-
-These options are described in the following sections.
-
-* xref:fts-type-mappings-add-child-mappings.adoc[Add Child Mapping]
-* xref:fts-type-mappings-add-child-field.adoc[Add Child Field]
\ No newline at end of file
diff --git a/modules/fts/pages/fts-type-mappings-Docid-with-regexp.adoc b/modules/fts/pages/fts-type-mappings-Docid-with-regexp.adoc
deleted file mode 100644
index 67b19f2031..0000000000
--- a/modules/fts/pages/fts-type-mappings-Docid-with-regexp.adoc
+++ /dev/null
@@ -1,66 +0,0 @@
-= DocID with regexp in Type Mappings
-
-"Doc ID with regexp" is another way the search service allows the user to extract type identifiers for indexing.
-
-* Set up a valid regular expression within `docid_regexp`. Remember this will be applied to the document IDs.
-* Choose a type mapping name that is considered a match for the regexp.
-* The type mapping name CANNOT be a regexp.
-
-For example, while working with the `travel-sample` bucket, set up `docid_regexp` to `air[a-z]{4}` and use the following type mappings:
-
-* airline
-* airport
-
-Below is a full index definition using it:
-
-[source,json]
-----
-{
-  "name": "airline-airport-index",
-  "type": "fulltext-index",
-  "params": {
-    "doc_config": {
-      "docid_prefix_delim": "",
-      "docid_regexp": "air[a-z]{4}",
-      "mode": "docid_regexp",
-      "type_field": "type"
-    },
-    "mapping": {
-      "default_analyzer": "standard",
-      "default_datetime_parser": "dateTimeOptional",
-      "default_field": "_all",
-      "default_mapping": {
-        "dynamic": true,
-        "enabled": false
-      },
-      "default_type": "_default",
-      "docvalues_dynamic": false,
-      "index_dynamic": true,
-      "store_dynamic": false,
-      "type_field": "_type",
-      "types": {
-        "airline": {
-          "dynamic": true,
-          "enabled": true
-        },
-        "airport": {
-          "dynamic": true,
-          "enabled": true
-        }
-      }
-    },
-    "store": {
-      "indexType": "scorch",
-      "segmentVersion": 15
-    }
-  },
-  "sourceType": "gocbcore",
-  "sourceName": "travel-sample",
-  "sourceParams": {},
-  "planParams": {
-    "indexPartitions": 1
-  }
-}
-----
-
-Setting this as the index definition indexes all attributes of documents whose IDs match `airline` or `airport`.
-
-image::fts-type-mapping-regexp-with-docid.png[,750,align=left]
-
-NOTE: The Go regexp support is based on https://github.com/google/re2/wiki/Syntax[RE2 syntax].
diff --git a/modules/fts/pages/fts-type-mappings-add-child-field-analyzer.adoc b/modules/fts/pages/fts-type-mappings-add-child-field-analyzer.adoc
deleted file mode 100644
index a2eaba9ba3..0000000000
--- a/modules/fts/pages/fts-type-mappings-add-child-field-analyzer.adoc
+++ /dev/null
@@ -1,9 +0,0 @@
-= Child Field Analyzer
-
-An analyzer can optionally be specified for the field.
-The list of available analyzers can be displayed, and selected from, by means of the field's pull-down menu.
-
-== Example
-
-image::fts-type-mappings-child-field-analysers.png[,200,align=left]
-
diff --git a/modules/fts/pages/fts-type-mappings-add-child-field-docvalues.adoc b/modules/fts/pages/fts-type-mappings-add-child-field-docvalues.adoc
deleted file mode 100644
index babf2aca70..0000000000
--- a/modules/fts/pages/fts-type-mappings-add-child-field-docvalues.adoc
+++ /dev/null
@@ -1,13 +0,0 @@
-= Child Field DocValues
-
-To include the value for each instance of the field in the index, the docvalues checkbox must be checked. This is essential for xref:fts-search-response-facets.adoc[Facets].
-
-For sorting of search results based on field values: see xref:fts-sorting.adoc[Sorting Query Results].
-
-By default, this checkbox is selected. If it is _unchecked_, the values are _not_ added to the index; and in consequence, neither Search Facets nor value-based result-sorting is supported.
-
-== Example
-
-image::fts-type-mappings-child-field-docvalues.png[,750,align=left]
-
-NOTE: When this checkbox is checked, the resulting index will increase proportionately in size.
\ No newline at end of file
diff --git a/modules/fts/pages/fts-type-mappings-add-child-field-field-name.adoc b/modules/fts/pages/fts-type-mappings-add-child-field-field-name.adoc
deleted file mode 100644
index bd3f7f2e29..0000000000
--- a/modules/fts/pages/fts-type-mappings-add-child-field-field-name.adoc
+++ /dev/null
@@ -1,8 +0,0 @@
-= Child Field Name
-
-The name of any field within the document that contains a single value or an array, rather than a JSON object.
-
-
-== Example
-
-image::fts-type-mappings-child-field-field-name.png[,750,align=left]
\ No newline at end of file
diff --git a/modules/fts/pages/fts-type-mappings-add-child-field-field-searchable-as.adoc b/modules/fts/pages/fts-type-mappings-add-child-field-field-searchable-as.adoc
deleted file mode 100644
index 795301304c..0000000000
--- a/modules/fts/pages/fts-type-mappings-add-child-field-field-searchable-as.adoc
+++ /dev/null
@@ -1,8 +0,0 @@
-= Child Field Searchable As
-
-Typically identical to the [.ui]*field* name (and dynamically supplied during text-input of the [.ui]*field* value).
-It can be modified to indicate an alternative field name, whose associated value is then included in the indexed content instead of the value associated with the name specified in *field*.
-
-== Example
-
-image::fts-type-mappings-child-field-field-searchable-as.png[,750,align=left]
\ No newline at end of file
diff --git a/modules/fts/pages/fts-type-mappings-add-child-field-field-type.adoc b/modules/fts/pages/fts-type-mappings-add-child-field-field-type.adoc
deleted file mode 100644
index 01ad86e447..0000000000
--- a/modules/fts/pages/fts-type-mappings-add-child-field-field-type.adoc
+++ /dev/null
@@ -1,11 +0,0 @@
-= Child Field Type
-
-The _data-type_ of the value of the field.
-This can be `text`, `number`, `datetime`, `boolean`, `disabled`, or `geopoint`; and can be selected from the field's pull-down menu, as follows:
-
-[#fts_type_mappings_ui_select_data_type]
-image::fts-type-mappings-ui-select-data-type.png[,300,align=left]
-
-== Example
-
-image::fts-type-mappings-child-field-type.png[,750,align=left]
\ No newline at end of file
diff --git a/modules/fts/pages/fts-type-mappings-add-child-field-include-in-all-field.adoc b/modules/fts/pages/fts-type-mappings-add-child-field-include-in-all-field.adoc
deleted file mode 100644
index 1de8838469..0000000000
--- a/modules/fts/pages/fts-type-mappings-add-child-field-include-in-all-field.adoc
+++ /dev/null
@@ -1,16 +0,0 @@
-= Child Field - Include in _all field
-
-When checked, the field is included in the definition of [.ui]*_all*, which is the field specified by default in the [.ui]*Advanced* panel.
-When unchecked, the field is not included.
-
-Inclusion means that when _query strings_ are used to specify searches, the text in the current field is searchable without the field name requiring a prefix.
-For example, a search on `description:modern` can be accomplished simply by specifying the word `modern`. This applies to all query types, not just the query string query type.
-
-
-== Example
-
-image::fts-type-mappings-child-field-include-in-all.png[,750,align=left]
-
-NOTE: "include in _all" writes a copy of the tokens generated for a particular field to the `_all` composite field. When this checkbox is checked, the resulting index will proportionately increase in size.
-
-Enabling this option results in larger indexes; to avoid the overhead, disable the option and use field-scoped queries in search requests.
diff --git a/modules/fts/pages/fts-type-mappings-add-child-field-include-term-vectors.adoc b/modules/fts/pages/fts-type-mappings-add-child-field-include-term-vectors.adoc
deleted file mode 100644
index 1f4283d11a..0000000000
--- a/modules/fts/pages/fts-type-mappings-add-child-field-include-term-vectors.adoc
+++ /dev/null
@@ -1,14 +0,0 @@
-= Child Field - Include term vectors
-
-When checked, term vectors are included.
-When unchecked, term vectors are not included.
-
-Term vectors are the locations of terms in a particular field.
-Certain kinds of functionality (such as highlighting, and phrase search) require term vectors.
-Inclusion of term vectors results in larger indexes and correspondingly slower index build-times.
-
-== Example
-
-image::fts-type-mappings-child-field-termvectors.png[,750,align=left]
-
-NOTE: "include term vectors" indexes the array positions (locations) of the terms within the field (needed for phrase searching and highlighting). When this checkbox is checked, the resulting index will proportionately increase in size.
diff --git a/modules/fts/pages/fts-type-mappings-add-child-field-index.adoc b/modules/fts/pages/fts-type-mappings-add-child-field-index.adoc
deleted file mode 100644
index 8595dcb93c..0000000000
--- a/modules/fts/pages/fts-type-mappings-add-child-field-index.adoc
+++ /dev/null
@@ -1,10 +0,0 @@
-= Child Field Index
-
-When checked, the field is indexed; when unchecked, the field is not indexed.
-This may be used, therefore, to explicitly remove an already-defined field from the index.
-
-== Example
-
-image::fts-type-mappings-child-field-index.png[,750,align=left]
-
-NOTE: When this checkbox is checked, the resulting index will proportionately increase in size.
\ No newline at end of file
diff --git a/modules/fts/pages/fts-type-mappings-add-child-field-store.adoc b/modules/fts/pages/fts-type-mappings-add-child-field-store.adoc
deleted file mode 100644
index c171f9feeb..0000000000
--- a/modules/fts/pages/fts-type-mappings-add-child-field-store.adoc
+++ /dev/null
@@ -1,22 +0,0 @@
-= Child Field Store
-
-When the child field 'store' option is checked, the original field content is included in the FTS index, enabling the retrieval of stored field values during a search operation. 
-
-When unchecked, the original field content is not included in the FTS index. Storing the field within the index is necessary to support highlighting, which also needs "term vectors” for the field to be indexed.
-
-== Example 
-image::fts-type-mappings-child-field-store.png[,700,align=left]
-
-Enabling the 'Child Field Store' option affects the size of the index. It also permits highlighting of search text in the returned results, so that matched expressions can easily be seen. However, enabling this option results in larger indexes and slightly longer indexing times.
-The field content shows up in query results (when the index has the store option checked) only when requested, by means of the `fields` section of the query:
-
-----
-{
-  "query": {...},
-  "fields": ["store_field_name"]
-}
-----
-
-Setting "fields" to ["*"] includes the contents of all stored fields in the response.
-
-NOTE: "store" writes a copy of the field content into the index. When this checkbox is checked, the resulting index will proportionately increase in size.
-
diff --git a/modules/fts/pages/fts-type-mappings-add-child-field.adoc b/modules/fts/pages/fts-type-mappings-add-child-field.adoc
deleted file mode 100644
index 57730dd77c..0000000000
--- a/modules/fts/pages/fts-type-mappings-add-child-field.adoc
+++ /dev/null
@@ -1,42 +0,0 @@
-= Add Child Field
-
-The option [.ui]*insert child field* allows a field to be individually included for (or excluded from) indexing, provided that it contains a single value or an array rather than a JSON object.
-Selecting this option displays the following:
-
-[#fts_type_mappings_child_field_dialog]
-image::fts-type-mappings-child-field-dialog.png[,700,align=left]
-
-The interactive fields and checkboxes are:
-
-** xref:fts-type-mappings-add-child-field-field-name.adoc[Field Name]
-
-** xref:fts-type-mappings-add-child-field-field-type.adoc[Field Type]
-
-** xref:fts-type-mappings-add-child-field-field-searchable-as.adoc[Field Searchable As]
-
-** xref:fts-type-mappings-add-child-field-analyzer.adoc[Analyzer]
-
-** xref:fts-type-mappings-add-child-field-index.adoc[Index]
-
-** xref:fts-type-mappings-add-child-field-store.adoc[Store]
-
-** xref:fts-type-mappings-add-child-field-include-term-vectors.adoc[Include term vectors]
-
-** xref:fts-type-mappings-add-child-field-include-in-all-field.adoc[Include in _all field]
-
-** xref:fts-type-mappings-add-child-field-docvalues.adoc[DocValues]
-
-
-
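-The checkboxes above correspond to properties of the field definition in the index JSON. A sketch of a completed text-field definition (property names assumed from the underlying bleve field mapping):
-
-[source,json]
-----
-{
-  "name": "content",
-  "type": "text",
-  "analyzer": "standard",
-  "index": true,
-  "store": true,
-  "include_term_vectors": true,
-  "include_in_all": true,
-  "docvalues": true
-}
-----
-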
-The dialog, when completed, might look as follows:
-
-[#fts_type-mappings_child_field_dialog_complete]
-image::fts-type-mappings-child-field-dialog-complete.png[,700,align=left]
-
-Left-click on [.ui]*OK*.
-The field is saved, and its principal attributes displayed on a new row:
-
-[#fts_type-mappings_child_field_saved]
-image::fts-type-mappings-child-field-saved.png[,700,align=left]
-
-Note that when this row is hovered over with the mouse, an *Edit* button appears, whereby updates to the definition can be made.
diff --git a/modules/fts/pages/fts-type-mappings-add-child-mappings.adoc b/modules/fts/pages/fts-type-mappings-add-child-mappings.adoc
deleted file mode 100644
index 260b7a04d1..0000000000
--- a/modules/fts/pages/fts-type-mappings-add-child-mappings.adoc
+++ /dev/null
@@ -1,29 +0,0 @@
-[#inserting-a-child-mapping]
-= Add Child Mapping
-
-The option [.ui]*insert child mapping* specifies a document-field whose value is a JSON object.
-Selecting this option displays the following:
-
-[#fts_type_mappings_child_mapping_dialog]
-image::fts-type-mappings-child-mapping-dialog.png[,700,align=left]
-
-The following interactive field and checkbox are displayed:
-
-* [.ui]*{}*: The name of a field whose value is a JSON object.
-Note that an analyzer can be specified for the field, by means of the pull-down menu.
-* [.ui]*only index specified fields*: When checked, only fields explicitly specified are added to the index.
-Note that the JSON object specified as the value for [.ui]*{}* has multiple fields of its own.
-Checking this box ensures that all or a subset of these can be selected for indexing.
-
-When completed, this panel might look as follows (note that `reviews` is a field within the `hotel`-type documents of the `travel-sample` bucket whose value is a JSON object):
-
-[#fts_type_mappings_child_mapping_dialog_complete]
-image::fts-type-mappings-child-mapping-dialog-complete.png[,700,align=left]
-
-Save by left-clicking *OK*.
-The field is now displayed as part of the `hotel` type mapping.
-Note that by hovering over the `reviews` row with the mouse, the [.ui]*Edit* and [.ui]*+* buttons are revealed: the [.ui]*+* button is present because `reviews` is an object that contains child-fields; which can now themselves be individually indexed.
-Left-click on this, and a child-field, such as `content`, can be specified:
-
-[#fts_type_mappings_child_mapping_add_field]
-image::fts-type-mappings-child-mapping-add-field.png[,700,align=left]
\ No newline at end of file
diff --git a/modules/fts/pages/fts-type-mappings.adoc b/modules/fts/pages/fts-type-mappings.adoc
deleted file mode 100644
index b2214e170d..0000000000
--- a/modules/fts/pages/fts-type-mappings.adoc
+++ /dev/null
@@ -1,465 +0,0 @@
-[#specifying-type-mappings]
-= Specifying Type Mappings
-
-[abstract]
-Whereas a _type identifier_ tells the index how to determine the position in each document of the characters that specify the document's type, a _type mapping_ specifies the characters themselves.
-
-If *Doc ID up to separator* is used as a type identifier, and the underscore is specified as the separator-character, a type mapping of _hotel_ ensures that `hotel_10123`, rather than `airline_10`, is indexed.
-
-When the [.ui]*Add Index*, [.ui]*Edit Index*, or [.ui]*Clone Index* screen is accessed, the [.ui]*Type Mappings* panel can be opened.
-
-The default setting is displayed:
-
-[#fts_type_mappings_ui_closed]
-image::fts-type-mappings-ui-closed.png[,750,align=left]
-
-Left-click on the *+ Add Type Mapping* button.
-The display now appears as follows:
-
-[#fts_type_mappings_ui_add]
-image::fts-type-mappings-ui-add.png[,750,align=left]
-
-The display indicates that a single type mapping is currently defined, which is `default`.
-
-This is a special type mapping created by every index automatically: it is applied to each document whose type _either_ does not match a user-specified type mapping, _or_ has no recognized type attribute.
-Therefore, if the default mapping is left enabled, all documents are included in the index, regardless of whether the user actively specifies type mappings.
-
-To ensure that only documents corresponding to the user's specified type mappings are included in the index, the default type mapping must be disabled (see below for an example).
-
-Each type mapping is listed as either *dynamic*, meaning that all fields are considered available for indexing, or *only index specified fields*, meaning that only fields specified by the user are indexed.
-
-Therefore, leaving the default mapping enabled and dynamic creates a large index whose response times may be relatively slow; as such, this option is potentially unsuitable for most production deployments.
-
-For information on how values are data-typed when dynamic mapping is specified, see the section below, xref:#document-fields-and-data-types[Document Fields and Data Types].
-
-To specify a type mapping, type an appropriate string (for example, `hotel`) into the interactive text field.
-Note the [.ui]*only index specified fields* checkbox: if this is checked, only user-specified fields from the document are included in the index.
-(For an example, see xref:fts-type-mapping-specifying-fields.adoc[Specifying Fields], below.)
-
-Optionally, an _analyzer_ can be specified for the type mapping: for all queries that support the use of an analyzer, the specified analyzer is applied, rather than the default analyzer (which is itself specified in the *Advanced* pane, as described below, in xref:fts-creating-index-specifying-advanced-settings.adoc[Specifying Advanced Settings]).
-
-A list of available analyzers can be accessed and selected from, by means of the pull-down menu to the right of the interactive text-field:
-
-[#fts_type_mappings_ui_analyzers_menu]
-image::fts-type-mappings-ui-analyzers-menu.png[,620,align=left]
-
-The default value, `inherit`, means that the type mapping inherits the default analyzer.
-Note that custom analyzers can be created and stored for the index that is being defined using the [.ui]*Analyzers* panel, described below in xref:fts-analyzers.adoc#Creating-Analyzers[Creating Analyzers].
-On creation, all custom analyzers are available for association with a type mapping, and so appear in the pull-down menu shown above.
-
-Additional information on analyzers can also be found on the page xref:fts-analyzers.adoc#Understanding-Analyzers[Understanding Analyzers].
-
-The [.ui]*Type Mappings* panel now appears as follows:
-
-[#fts_type_mappings_ui_addition_both_checked]
-image::fts-type-mappings-ui-addition-both-checked.png[,750,align=left]
-
-Note that the checkbox to the left of each of the two specified type mappings, `hotel` and `default`, is checked.
-
-Because `default` is checked, _all_ documents in the bucket (not merely those that correspond to the `hotel` type mapping) will be included in the index.
-To ensure that only `hotel` documents are included, _uncheck_ the checkbox for `default`.
-The panel now appears as follows:
-
-[#fts_type_mappings_ui_addition_default_unchecked]
-image::fts-type-mappings-ui-addition-default-unchecked.png[,750,align=left]
-
-Note also that should you wish to ensure that all documents in the bucket are included in the index _except_ those that correspond to the `hotel` type mapping, _uncheck_ the checkbox for `hotel`, and _check_ the `default` checkbox:
-
-[#fts_type_mappings_ui_addition_default_checked]
-image::fts-type-mappings-ui-addition-default-checked.png[,750,align=left]
-
-== Specifying Type Mapping for Collection
-
-A type mapping allows you to search for documents from a selected scope, from selected collections within that scope, or for a specific document type within the selected scope and collections.
-
-To use a non-default scope or collections, see xref:fts-creating-index-from-UI.adoc#using-non-default-scope-collections[Using Non-Default Scope/Collections].
-
-** Left-click on the *+ Add Type Mapping* button. The display now appears as follows:
-
-image::fts-type-mapping-for-collection.png[,700,align=left]
-
-In Type Mappings, you can add a mapping for a *single collection* or for *multiple collections*. To specify the collection, click the Collection drop-down list and select the required collection.
-
-The *Collection* field displays the selected collection along with the selected scope. For example, `inventory.airport` or `inventory.hotel`.
-
-** Click *OK* to add the collection to the index. Repeat the same process to add further collections to the index.
-
-NOTE: In Type Mappings, you can add multiple collections to the index: select one collection to create a single-collection index, or select multiple collections to create a multiple-collection index.
-
-The Type Mappings panel appears as follows:
-
-== Type Mapping with Single Collection
-
-With a single-collection index, you can search documents only from the single collection specified in the Type Mappings.
-
-image::fts-type-mappings-single-collection.png[,750,align=left]
-
-== Type Mapping with Multiple Collections
-
-With a multiple-collection index, you can search documents across multiple collections (within a single scope) specified in the Type Mappings.
-
-image::fts-type-mappings-multiple-collections.png[,750,align=left]
-
-== Type Mapping with Specific Document Type
-
-With a specific document type, you can search documents of a specific type from a single collection or multiple collections. Every document in Couchbase includes the `type` field, which represents the type of the document. For example, the type `airport` represents documents related to airport information.
-
-image:fts-type-mapping-with-specific-document-type.png[,,align=left]
-
-If you want to search for a specific document type from a single collection or multiple collections, you can manually specify the document type after the collection in the Collection field. For example, `inventory.airline.airport` or `inventory.route.airport`.
-
-Now, when you search for the airport document type, the index will display all documents from a single collection or multiple collections where the `type` field is `airport`.
-
-image:fts-display-type-field.png[,750,align=left]
-
-You can click the document link and verify the document type.
-
-[#document-type-with-single-collections]
-== Document Type with single collection
-
-Every document in Couchbase includes the `type` field, which represents the type of the document. For example, type `airport` represents the documents related to airport information.
-
-If you want to search for a specific document type from a single collection, you can manually specify the document type after the collection in the Collection field.
-
-For example, `inventory.airline.airport` or `inventory.route.airport`.
-
-image:fts-type-mapping-specific-document-type-single-collection.png[,750,align=left]
-
-Now, when you search for the airport document type, the index will display all documents from a single collection where the `type` field is `airport`.
-
-[#document-type-with-multiple-collections]
-== Document Type with multiple collections
-
-Every document in Couchbase includes the `type` field, which represents the type of the document. For example, type `airport` represents the documents related to airport information.
-
-If you want to search for a specific document type from multiple collections, you can manually specify the document type after the collection in the Collection field.
-
-For example, `inventory.airline.airport` or `inventory.route.airport`.
-
-image:fts-type-mapping-specific-document-type-multiple-collections.png[,750,align=left]
-
-Now, when you search for the airport document type, the index will display all documents from the multiple collections where the `type` field is `airport`.
-
-[#document-fields-and-data-types]
-== Document-Fields and Data-Types
-
-During index creation, for each document-field whose data-type has not been explicitly specified (as *text*, *number*, *datetime*, *boolean*, *disabled*, *geopoint*, or *geoshape*), the field-value is examined, and the best-possible determination is made, as follows:
-
-|===
-| Type of JSON value | Indexed as\...
-
-| Boolean
-| Boolean
-
-| Number
-| Number
-
-| String containing a date
-| Date
-
-| String (not containing a date)
-| String
-
-| Geopoint
-| A xref:fts-supported-queries-geopoint-spatial.adoc#recognizing_target_data[legacy lat/lon pair]
-
-| Geoshape
-| A xref:fts-supported-queries-geojson-spatial.adoc#supported-geojson-data-types[GeoJSON shape]
-
-|===
-
-NOTE: The indexer attempts to parse String date-values as dates, and indexes them as such if the operation succeeds. However, on query-execution, Full Text Search expects dates to be in the format specified by https://www.ietf.org/rfc/rfc3339.txt[RFC-3339^], which is a specific profile of ISO-8601. 
-
-String values such as `7` or `true` remain Strings: they are not indexed as numbers or Booleans, respectively.
-
-The number-type is modeled as a 64-bit floating-point value internally.
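-
-As an illustrative (hypothetical) document, the following values would be typed as shown in the table above: `vacancy` as a Boolean, `rating` as a number, `updated` as a date (the string parses under RFC-3339), and `name` and `rooms` as Strings. Note that `"7"` remains a String:
-
-[source, json]
-----
-{
-  "name": "Sample Hotel",
-  "vacancy": true,
-  "rating": 4.5,
-  "updated": "2015-03-25T12:00:00Z",
-  "rooms": "7"
-}
-----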
-
-[#exclude-fields-from-dynamic-fts-index]
-== Excluding child field/ child mapping from a dynamic FTS index 
-
-If you want to index everything except a particular child field or child mapping, add that child field or child mapping explicitly, and then disable it: uncheck the *Index* option for a child field, or uncheck the mapping's checkbox for a child mapping.
-
-Perform the following steps:
-
-1. In the index, add a type mapping and set it to dynamic.
-2. In the type mapping, add a child field.
-3. For the fields, uncheck the *Index* option from its settings.
-4. For the mapping, uncheck the corresponding dynamic type mapping check box to disable it.
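-
-In the resulting index definition JSON, the equivalent of steps 2 and 3 is a child field with `"index": false` under an otherwise dynamic type mapping. The fragment below is a sketch (the `hotel` mapping and `description` field names are illustrative):
-
-[source, json]
-----
-{
-  "types": {
-    "hotel": {
-      "dynamic": true,
-      "enabled": true,
-      "properties": {
-        "description": {
-          "enabled": true,
-          "dynamic": false,
-          "fields": [
-            {
-              "name": "description",
-              "type": "text",
-              "index": false
-            }
-          ]
-        }
-      }
-    }
-  }
-}
-----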
-
-[#specifying-fields-for-type-mapping]
-== Specifying fields for Type Mapping
-
-A Full Text Index can be defined not only to include (or exclude) documents of a certain type but also to include (or exclude) specified fields within each of the typed documents.
-
-To specify one or more fields, hover with the mouse cursor over a row in the Type Mappings panel that contains an enabled type mapping. Buttons labeled *edit* and *+* appear:
-
-image::fts-type-mappings-ui-fields-buttons.png[,700,align=left]
-
-Left-clicking on the *edit* button displays the following interface:
-
-image::fts-type-mappings-ui-edit.png[,700,align=left]
-
-This allows the mapping to be deleted or associated with a different analyzer. 
-
-NOTE: FTS indexing does not work for fields that have a dot (period) in the field name, such as `field.name` or `country.name`; avoid dots in field names. For example, `{ "database.name": "couchbase" }` is not supported.
-
-If the *only index specified fields* checkbox is checked, only fields specified by the user are included in the index.
-
-Left-clicking on the *+* button displays a pop-up that features two options:
-
-image::fts-type-mappings-ui-field-options.png[,700,align=left]
-
-These options are described in the following sections.
-
-* xref:fts-type-mappings-add-child-mappings.adoc[Add Child Mapping]
-* xref:fts-type-mappings-add-child-field.adoc[Add Child Field]
-
-[#inserting-a-child-mapping]
-== Add Child Mapping
-
-The option [.ui]*insert child mapping* specifies a document-field whose value is a JSON object.
-Selecting this option displays the following:
-
-[#fts_type_mappings_child_mapping_dialog]
-image::fts-type-mappings-child-mapping-dialog.png[,700,align=left]
-
-The following interactive field and checkbox are displayed:
-
-* [.ui]*{}*: The name of a field whose value is a JSON object.
-Note that an analyzer for the field is specified by means of the pull-down menu.
-* [.ui]*only index specified fields*: When checked, only fields explicitly specified are added to the index.
-Note that the JSON object specified as the value for [.ui]*{}* has multiple fields of its own.
-Checking this box ensures that all or a subset of these can be selected for indexing.
-
-When completed, this panel might look as follows (note that `reviews` is a field within the `hotel`-type documents of the `travel-sample` bucket whose value is a JSON object):
-
-[#fts_type_mappings_child_mapping_dialog_complete]
-image::fts-type-mappings-child-mapping-dialog-complete.png[,700,align=left]
-
-Save by left-clicking *OK*.
-The field is now displayed as part of the `hotel` type mapping.
-Note that by hovering over the `reviews` row with the mouse, the [.ui]*Edit* and [.ui]*{plus}* buttons are revealed: the [.ui]*{plus}* button is present because `reviews` is an object that contains child-fields, which can now themselves be individually indexed.
-Left-click on this, and a child-field, such as `content`, can be specified:
-
-[#fts_type_mappings_child_mapping_add_field]
-image::fts-type-mappings-child-mapping-add-field.png[,700,align=left]
-
-
-== Add Child Field
-
-The option [.ui]*insert child field* allows a field to be individually included for (or excluded from) indexing, provided that it contains a single value or an array rather than a JSON object.
-Selecting this option displays the following:
-
-[#fts_type_mappings_child_field_dialog]
-image::fts-type-mappings-child-field-dialog.png[,700,align=left]
-
-The interactive fields and checkboxes are:
-
-* *Field Name*
-* *Field Type*
-* *Field Searchable As*
-* *Analyzer*
-* *Index*
-* *Store*
-* *Include term vectors*
-* *Include in _all field*
-* *DocValues*
-
-=== Field Name
-
-The name of any field within the document that contains a single value or an array, rather than a JSON object.
-
-==== Example
-
-image::fts-type-mappings-child-field-field-name.png[,750,align=left]
-
-=== Field Type
-
-The _data-type_ of the value of the field.
-This can be `text`, `number`, `datetime`, `boolean`, `disabled`, `geopoint`, or `geoshape`; and can be selected from the field's pull-down menu, as follows:
-
-[#fts_type_mappings_ui_select_data_type]
-image::fts-type-mappings-ui-select-data-type.png[,750,align=left]
-
-==== Example
-
-image::fts-type-mappings-child-field-type.png[,750,align=left]
-
-=== Field Searchable As
-
-Typically identical to the [.ui]*Field Name* (and filled in dynamically as the [.ui]*Field Name* value is typed).
-It can be modified to specify an alternative name under which the field is searchable: the indexed content then becomes associated with that alternative name, rather than with the name specified in *Field Name*.
-
-==== Example
-
-image::fts-type-mappings-child-field-field-searchable-as.png[,750,align=left]
-
-=== Field Analyzer
-
-An analyzer optionally to be used for the field.
-The list of available analyzers can be displayed by means of the field's pull-down menu, and can be selected from.
-
-==== Example
-
-image::fts-type-mappings-child-field-analysers.png[,750,align=left]
-
-=== Index
-
-When checked, the field is indexed; when unchecked, the field is not indexed.
-This may be used, therefore, to explicitly remove an already-defined field from the index.
-
-==== Example
-
-image::fts-type-mappings-child-field-index.png[,750,align=left]
-
-NOTE: When this checkbox is checked, the resulting index will proportionately increase in size.
-
-=== Store
-
-When the child field `store` option is checked, the original field content is included in the FTS index, enabling the retrieval of stored field values during a search operation. 
-
-When unchecked, the original field content is not included in the FTS index. Storing the field within the index is necessary to support highlighting, which also requires "term vectors" to be indexed for the field.
-
-==== Example 
-image::fts-type-mappings-child-field-store.png[,700,align=left]
-
-Note that enabling the `Store` option affects the sizing of the index definition. The option permits highlighting of search text in the returned results, so that matched expressions can be easily seen; however, it also results in larger indexes and slightly longer indexing times.
-When the index has the store option checked, the stored field content appears in query results only when requested, by means of the `fields` section of the query:
-
-----
-{
-  "query": {...},
-  "fields": ["store_field_name"]
-}
-----
-
-Setting `fields` to `["*"]` will include the contents of all stored fields in the response.
-
-NOTE: "store" writes a copy of the field content into the index. When this checkbox is checked, the resulting index will proportionately increase in size.
-
-=== Include term vectors
-
-When checked, term vectors are included.
-When unchecked, term vectors are not included.
-
-Term vectors are the locations of terms in a particular field.
-Certain kinds of functionality (such as highlighting, and phrase search) require term vectors.
-Inclusion of term vectors results in larger indexes and correspondingly slower index build-times.
-
-==== Example
-
-image::fts-type-mappings-child-field-termvectors.png[,750,align=left]
-
-NOTE: "include term vectors" indexes the array positions (locations) of the terms within the field (needed for phrase searching and highlighting). When this checkbox is checked, the resulting index will proportionately increase in size.
-
-=== Include in _all field
-
-When checked, the field is included in the definition of [.ui]*+_all+*, which is the field specified by default in the [.ui]*Advanced* panel.
-When unchecked, the field is not included.
-
-Inclusion means that when _query strings_ are used to specify searches, the text in the current field is searchable without requiring the field name as a prefix.
-For example, a search on `description:modern` can be accomplished simply by specifying the word `modern`. This applies to all query types, not just the query-string query type.
-
-==== Example
-
-image::fts-type-mappings-child-field-include-in-all.png[,750,align=left]
-
-NOTE: "include in _all" will write a copy of the tokens generated for a particular field to the "_all" composite field. When this checkbox is checked, the resulting index will proportionately increase in size.
-
-Enabling this option results in larger indexes; if you always use field-scoped queries in your search requests, disable this option to keep the index smaller.
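-
-As a sketch, with the `description` field included in `_all`, both of the following query-string request bodies match documents containing "modern"; if the field is excluded from `_all`, only the field-scoped form does:
-
-[source, json]
-----
-{
-  "query": { "query": "modern" }
-}
-----
-
-[source, json]
-----
-{
-  "query": { "query": "description:modern" }
-}
-----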
-
-=== DocValues
-
-To include the value for each instance of the field in the index, the *DocValues* checkbox must be checked. This is essential for xref:fts-search-response-facets.adoc[Facets].
-
-For sorting of search results based on field values: see xref:fts-sorting.adoc[Sorting Query Results].
-
-By default, this checkbox is selected. If it is _unchecked_, the values are _not_ added to the index; and in consequence, neither Search Facets nor value-based result-sorting is supported.
-
-==== Example
-
-image::fts-type-mappings-child-field-docvalues.png[,750,align=left]
-
-NOTE: When this checkbox is checked, the resulting index will increase proportionately in size.
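-
-As a sketch (the facet name and range boundaries are illustrative), a numeric-range facet over a docvalues-enabled field such as `reviews.ratings.Service` might be requested as follows:
-
-[source, json]
-----
-{
-  "query": { "field": "reviews.content", "term": "good" },
-  "facets": {
-    "service": {
-      "size": 5,
-      "field": "reviews.ratings.Service",
-      "numeric_ranges": [
-        { "name": "high", "min": 4 },
-        { "name": "low", "max": 4 }
-      ]
-    }
-  }
-}
-----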
-
-The dialog, when completed, might look as follows:
-
-[#fts_type-mappings_child_field_dialog_complete]
-image::fts-type-mappings-child-field-dialog-complete.png[,700,align=left]
-
-Left-click on [.ui]*OK*.
-The field is saved, and its principal attributes displayed on a new row:
-
-[#fts_type-mappings_child_field_saved]
-image::fts-type-mappings-child-field-saved.png[,700,align=left]
-
-NOTE: When you hover the mouse over this row, an *Edit* button appears, where you can make updates to the definition.
-
-== DocID with regexp in Type Mappings
-
-"Doc ID with regexp" is another way in which the Search Service allows the user to extract type identifiers for indexing.
-
-* Set up a valid regular expression in `docid_regexp`. Remember that this is applied to the document IDs.
-* Choose a type mapping name that matches the regular expression.
-* The type mapping name cannot itself be a regular expression.
-
-For example, while working with the `travel-sample` bucket, set up `docid_regexp` as `air[a-z]{4}` and use the following type mappings:
-
-* airline
-* airport
-
-Below is a full index definition using it.
-
-[source, json]
-----
-{
-  "name": "airline-airport-index",
-  "type": "fulltext-index",
-  "params": {
-    "doc_config": {
-      "docid_prefix_delim": "",
-      "docid_regexp": "air[a-z]{4}",
-      "mode": "docid_regexp",
-      "type_field": "type"
-    },
-    "mapping": {
-      "default_analyzer": "standard",
-      "default_datetime_parser": "dateTimeOptional",
-      "default_field": "_all",
-      "default_mapping": {
-        "dynamic": true,
-        "enabled": false
-      },
-      "default_type": "_default",
-      "docvalues_dynamic": false,
-      "index_dynamic": true,
-      "store_dynamic": false,
-      "type_field": "_type",
-      "types": {
-        "airline": {
-          "dynamic": true,
-          "enabled": true
-        },
-        "airport": {
-          "dynamic": true,
-          "enabled": true
-        }
-      }
-    },
-    "store": {
-      "indexType": "scorch",
-      "segmentVersion": 15
-    }
-  },
-  "sourceType": "gocbcore",
-  "sourceName": "travel-sample",
-  "sourceParams": {},
-  "planParams": {
-    "indexPartitions": 1
-  }
-}
-----
-
-Setting this as the index definition indexes all attributes of documents whose IDs contain "airline" or "airport".
-
-image::fts-type-mapping-regexp-with-docid.png[,550,align=left]
-
-NOTE: The Go regexp support is based on the https://github.com/google/re2/wiki/Syntax[RE2 syntax^].
diff --git a/modules/install/pages/amazon-linux2-install.adoc b/modules/install/pages/amazon-linux2-install.adoc
index 5762ec8f79..d2d47a71d5 100644
--- a/modules/install/pages/amazon-linux2-install.adoc
+++ b/modules/install/pages/amazon-linux2-install.adoc
@@ -1,7 +1,7 @@
 = Install Couchbase Server on Amazon Linux 2
 :description: Couchbase Server can be installed on Amazon Linux 2 for production and development use-cases. \
 Root and non-root installations are supported.
-:page-edition: enterprise
+:page-edition: Enterprise Edition
 :tabs:
 
 [abstract]
diff --git a/modules/install/pages/thp-disable.adoc b/modules/install/pages/thp-disable.adoc
index 9223c4e671..c5cce77ac0 100644
--- a/modules/install/pages/thp-disable.adoc
+++ b/modules/install/pages/thp-disable.adoc
@@ -14,6 +14,11 @@ However, THP is detrimental to Couchbase's performance (as it is for nearly all
 
 You must disable THP on Linux systems to ensure the optimal performance of Couchbase Server.
 
+NOTE: If you are using Rocky Linux, disable THP by means of a service, as described in <<using-thp-service,Using a THP Service>>.
+
 [#init-script]
 == Using Init Script
 
@@ -237,3 +242,66 @@ If THP is properly disabled, the output of both commands should be the following
 ----
 always madvise [never]
 ----
+
+[#using-thp-service]
+== Using a THP Service
+
+. Create a service file.
++
+[source, console]
+----
+vi /etc/systemd/system/disable-thp.service
+----
+
+. Add the service configuration details to the file and then save it.
++
+[source, console]
+----
+[Unit]
+Description=Disable Transparent Huge Pages (THP)
+DefaultDependencies=no
+After=sysinit.target local-fs.target
+Before=couchbase-server.service
+
+[Service]
+Type=oneshot
+ExecStart=/bin/sh -c 'echo never | tee /sys/kernel/mm/transparent_hugepage/enabled > /dev/null'
+ExecStart=/bin/sh -c 'echo never | tee /sys/kernel/mm/transparent_hugepage/defrag > /dev/null'
+
+[Install]
+WantedBy=basic.target
+----
+
+. Reload the `systemctl` files.
++
+[source, console]
+----
+sudo systemctl daemon-reload
+----
+
+. Start the service.
++
+[source, console]
+----
+sudo systemctl start disable-thp
+----
+
+. Ensure that the service will start whenever the system is rebooted.
++
+[source, console]
+----
+sudo systemctl enable disable-thp
+----
+
+[#verify-thp-service]
+== Verify THP is Disabled
+
+Execute the following commands to ensure that the service has disabled THP. If THP is properly disabled, the output of both commands is `always madvise [never]`.
+
+[source, console]
+----
+cat /sys/kernel/mm/transparent_hugepage/enabled
+cat /sys/kernel/mm/transparent_hugepage/defrag
+----
+
+
diff --git a/modules/install/pages/upgrade-feature-availability.adoc b/modules/install/pages/upgrade-feature-availability.adoc
index 0baca32647..e105d4b3cc 100644
--- a/modules/install/pages/upgrade-feature-availability.adoc
+++ b/modules/install/pages/upgrade-feature-availability.adoc
@@ -67,11 +67,11 @@ All nodes in the cluster should first be upgraded.
 All nodes in the cluster must be upgraded, before new Analytics-Service features can be used.
 
 | Query Service
-| The xref:tools:query-workbench.adoc#index-advisor[Index Advisor] and xref:settings:query-settings.adoc[Memory-Usage Quota Setting] are usable in mixed mode.
+| The xref:tools:query-workbench.adoc#index-advisor[Index Advisor] and xref:n1ql:n1ql-manage/query-settings.adoc[Memory-Usage Quota Setting] are usable in mixed mode.
 
 xref:n1ql:n1ql-language-reference/flex-indexes.adoc[Flex Indexes], though not a new feature, are usable in mixed mode only if the cluster being upgraded is at least version 6.5.
 
-The xref:n1ql:n1ql-language-reference/cost-based-optimizer.adoc[], xref:n1ql:n1ql-language-reference/transactions.adoc[], and xref:n1ql:n1ql-language-reference/userfun.adoc[] are not usable in mixed mode.
+The xref:n1ql:n1ql-language-reference/cost-based-optimizer.adoc[Cost-Based Optimizer for Queries], xref:n1ql:n1ql-language-reference/transactions.adoc[], and xref:n1ql:n1ql-language-reference/userfun.adoc[] are not usable in mixed mode.
 
 | xref:learn:clusters-and-availability/system-events.adoc[System Events]
 | System Events can only be used when every node in the cluster is running a version of Couchbase Server that is 7.1 or later.
diff --git a/modules/introduction/pages/whats-new.adoc b/modules/introduction/pages/whats-new.adoc
index 0d17f4dc03..14da19e71b 100644
--- a/modules/introduction/pages/whats-new.adoc
+++ b/modules/introduction/pages/whats-new.adoc
@@ -8,8 +8,15 @@
 
 For information about platform support changes, deprecation notifications, notable improvements, and fixed and known issues, refer to the xref:release-notes:relnotes.adoc[Release Notes].
 
+[#new-features-762]
+== New Features and Enhancements in 7.6.2
+
+The following new features are provided in this release.
+
+include::partial$new-features-76_2.adoc[]
+
 [#new-features]
-== New Features and Enhancements
+== New Features and Enhancements in 7.6.0
 
 The following new features are provided in this release.
 
diff --git a/modules/introduction/partials/dot-net-sdk-compat.adoc b/modules/introduction/partials/dot-net-sdk-compat.adoc
new file mode 100644
index 0000000000..f421f90b55
--- /dev/null
+++ b/modules/introduction/partials/dot-net-sdk-compat.adoc
@@ -0,0 +1,3 @@
+Use version 3.5.1 or later of the .NET SDK with Couchbase Server 7.6. 
+Earlier versions of this SDK have some compatibility issues.
+
diff --git a/modules/introduction/partials/new-features-76.adoc b/modules/introduction/partials/new-features-76.adoc
index 261c4da2c4..261d1ac8a8 100644
--- a/modules/introduction/partials/new-features-76.adoc
+++ b/modules/introduction/partials/new-features-76.adoc
@@ -45,6 +45,10 @@ See xref:rest-api:rest-auditing.adoc[Configure Auditing].
 * You can add one or more arbiter nodes to a cluster.
 include::learn:partial$arbiter-node-benefits.adoc[]
 
+* The `sampleBuckets/install` REST API method now returns a JSON object containing the list of tasks Couchbase Server started to load the buckets.
+In addition, the `/pools/default/tasks` REST API endpoint now takes an optional `taskId` parameter to view details about a sample bucket loading task.
+See xref:manage:manage-settings/install-sample-buckets.adoc#install-sample-buckets-with-the-rest-api[Install Sample Buckets with the REST API] for more information.
+
 === Backup and Restore
 
 * The Role-Based Access Control (RBAC) REST API has a new `backup` endpoint that lets you backup and restore user and user groups. See xref:rest-api:rbac.adoc#backup-and-restore-users-and-groups[Backup and Restore Users and Groups].
@@ -151,7 +155,7 @@ See xref:learn:clusters-and-availability/rebalance.adoc#index-rebalance-methods[
 * Couchbase Server 7.6 introduces Vector Search to enable AI integration, semantic search, and the RAG framework.
 A developer-friendly vector indexing engine exposes a vector database and search functionality.
 With Couchbase Vector Search, you can enable fast and highly accurate semantic search, ground LLM responses in relevant data to reduce hallucinations, and enhance or enable use cases like personalized searches in e-commerce and media & entertainment, product recommendations, fraud detection, and reverse image search.
-You can also enable full access to an AI ecosystem with a Langchain integration, the most popular open-source framework for LLM-driven applications.
+You can also enable full access to an AI ecosystem with a LangChain integration, the most popular open-source framework for LLM-driven applications.
 +
 A Vector Search database includes:
 +
@@ -160,7 +164,7 @@ A Vector Search database includes:
 ** Storage of raw Embedding Vectors in the Data Service in the documents themselves
 ** Querying Vector Indexes (REST and UI via a JSON object/fragment, Couchbase SDKs, and {sqlpp})
 ** {sqlpp}/N1QL integration
-** Third-party framework integration: Langchain (later Llamaindex + others)
+** Third-party framework integration: LangChain (later LlamaIndex + others)
 ** Full support for Replicas Partitions and file-based Rebalance
 
 NOTE: Vector Search is currently only supported on Couchbase Server 7.6.0 deployments running on Linux platforms.
@@ -207,7 +211,7 @@ See xref:n1ql:n1ql-language-reference/sequenceops.adoc[].
 See xref:n1ql:n1ql-language-reference/explainfunction.adoc[].
 
 * cbq shell additions.
-See xref:tools:cbq-shell.adoc[cbq]:
+See xref:n1ql:n1ql-intro/cbq.adoc[cbq]:
 
 ** The `-query_context` command line option.
 ** The `-advise` command line option.
@@ -222,19 +226,19 @@ See xref:n1ql:n1ql-language-reference/createcollection.adoc[].
 See xref:rest-api:rest-initialize-cluster.adoc[].
 
 * Named and positional parameters can now be prefixed by `$` or `@` in a query.
-See xref:settings:query-settings.adoc#section_srh_tlm_n1b[Named Parameters and Positional Parameters].
+See xref:n1ql:n1ql-manage/query-settings.adoc#section_srh_tlm_n1b[Named Parameters and Positional Parameters].
 
 * The `system:indexes` catalog now enables you to find the number of replicas configured for each index.
 See xref:n1ql:n1ql-intro/sysinfo.adoc#querying-indexes[Query Indexes].
 
 * The Query Service adds cluster-level and node-level parameters to limit the size of explain plans in the cache.
-See xref:settings:query-settings.adoc#queryPreparedLimit[queryPreparedLimit] and xref:settings:query-settings.adoc#prepared-limit[prepared-limit].
+See xref:n1ql:n1ql-manage/query-settings.adoc#queryPreparedLimit[queryPreparedLimit] and xref:n1ql:n1ql-manage/query-settings.adoc#prepared-limit[prepared-limit].
 
 * The Query Service adds support for sequential scans, controlled by RBAC, which enables querying without an index.
 See xref:learn:services-and-indexes/indexes/query-without-index.adoc[].
 
 * The node-level N1QL Feature Control parameter now accepts hexadecimal strings or decimal integers.
-See xref:settings:query-settings.adoc#n1ql-feat-ctrl[n1ql-feat-ctrl].
+See xref:n1ql:n1ql-manage/query-settings.adoc#n1ql-feat-ctrl[n1ql-feat-ctrl].
 
 * Queries can now read from replica vBuckets when active vBuckets are inaccessible.
 The Query service adds new cluster-level, node-level, and request-level parameters to configure this feature.
@@ -244,7 +248,7 @@ See xref:manage:manage-settings/general-settings.adoc#query-settings[Query Setti
 See xref:n1ql:n1ql-language-reference/createfunction.adoc#sql-managed-user-defined-functions[{sqlpp} Managed User-Defined Functions].
 
 * When a query executes a user-defined function, profiling information is now available for any queries within the UDF.
-See xref:manage:monitor/monitoring-n1ql-query.adoc[].
+See xref:n1ql:n1ql-manage/monitoring-n1ql-query.adoc[].
 
 * The Query service collects statistics for the cost-based optimizer automatically when an index is created or built.
 See xref:n1ql:n1ql-language-reference/cost-based-optimizer.adoc[].
@@ -291,3 +295,10 @@ See xref:install:upgrade.adoc[] for more information.
 
 * You can no longer set the `sendStats` to `false` in Couchbase Server Community Edition clusters.
 You can still set `sendStats` to `false` on Couchbase Server Enterprise Edition clusters.
+
+=== .NET SDK Compatibility
+
+include::partial$dot-net-sdk-compat.adoc[]
+
+
+
diff --git a/modules/introduction/partials/new-features-76_2.adoc b/modules/introduction/partials/new-features-76_2.adoc
new file mode 100644
index 0000000000..d7dcb6445d
--- /dev/null
+++ b/modules/introduction/partials/new-features-76_2.adoc
@@ -0,0 +1,11 @@
+[#backup_762]
+=== Backup
+
+* Users with the xref:learn:security/roles.adoc#read-only-admin[Read-Only Admin] role can now read backup information from the following Backup Service REST API endpoints:
+
+** xref:rest-api:backup-get-cluster-info.adoc[`/api/v1/cluster/self`]
+** xref:rest-api:backup-manage-config.adoc[`/api/v1/config`]
+** xref:rest-api:backup-get-repository-info.adoc[`/api/v1/cluster/self/repository/{repo-state}`]
+** xref:rest-api:backup-get-task-info.adoc[`/api/v1/cluster/self/repository/{repo-state}/{task-name}/taskHistory`]
+** xref:rest-api:backup-get-plan-info.adoc[`/api/v1/plan/`]
+
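A Read-Only Admin can read these endpoints with ordinary basic authentication; the sketch below builds (but does not send) such a request. The credentials are hypothetical, and 8097 is assumed to be the default Backup Service REST port:

```python
import base64
import urllib.request

# Hypothetical Read-Only Admin credentials.
user, password = "ro_admin", "password"
token = base64.b64encode(f"{user}:{password}".encode()).decode()

# One of the Backup Service endpoints now readable by a Read-Only Admin.
req = urllib.request.Request(
    "http://localhost:8097/api/v1/cluster/self",
    headers={"Authorization": f"Basic {token}"},
)
print(req.full_url)
```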
diff --git a/modules/learn/pages/clusters-and-availability/xdcr-overview.adoc b/modules/learn/pages/clusters-and-availability/xdcr-overview.adoc
index 4df1243b7a..8e9f685e77 100644
--- a/modules/learn/pages/clusters-and-availability/xdcr-overview.adoc
+++ b/modules/learn/pages/clusters-and-availability/xdcr-overview.adoc
@@ -271,16 +271,26 @@ Detailed information is provided in xref:manage:manage-xdcr/monitor-xdcr-replica
 
 The following table indicates XDCR compatibility between different versions of Couchbase Enterprise Server, used for source and target clusters.
 
-[cols="6,3,3,3,3,3"]
+[cols="6,3,3,3,3,3,3"]
 |===
 | *Enterprise Server Version*
-| *7.2.0*
+| *7.6.x*
+| *7.2.x*
 | *7.1.4*, *7.1.3*, *7.1.2*, *7.1.1*
 | *7.1.0*
 | *7.0.x*
 | *6.6.x*
 
-| 7.2.0
+| 7.6.x
+| ✓
+| ✓
+| ✓
+| ✓
+| ✓
+| ✓
+
+| 7.2.x
+| ✓
 | ✓
 | ✓
 | ✓
@@ -293,12 +303,14 @@ The following table indicates XDCR compatibility between different versions of C
 | ✓
 | ✓
 | ✓
+| ✓
 
 | 7.1.0
 | ✓
 | ✓
 | ✓
 | ✓
+| ✓
 | ❌
 
 | 7.0.x
@@ -307,10 +319,12 @@ The following table indicates XDCR compatibility between different versions of C
 | ✓
 | ✓
 | ✓
+| ✓
 
 | 6.6.x
 | ✓
 | ✓
+| ✓
 | ❌
 | ✓
 | ✓
diff --git a/modules/learn/pages/data/transactions.adoc b/modules/learn/pages/data/transactions.adoc
index 84a5e72b30..e8d7ba82bd 100644
--- a/modules/learn/pages/data/transactions.adoc
+++ b/modules/learn/pages/data/transactions.adoc
@@ -47,7 +47,7 @@ Transaction APIs support:
 
 Multiple Key-Value and Query DML statements can be used together inside a transaction.
 
-When query DML statements are used within a transaction, xref:settings:query-settings.adoc#transactional-scan-consistency[request_plus] semantics are automatically used to ensure all updates done (and committed if done in a transaction) before the start of the transaction are visible to the query statements within it.
+When query DML statements are used within a transaction, xref:n1ql:n1ql-manage/query-settings.adoc#transactional-scan-consistency[request_plus] semantics are automatically used to ensure all updates done (and committed if done in a transaction) before the start of the transaction are visible to the query statements within it.
 
 == Using Transactions
 
@@ -297,7 +297,7 @@ Transactions can be configured using a number of settings and request-level para
 |See xref:java-sdk:howtos:distributed-acid-transactions-from-the-sdk.adoc#configuration[Setting Durability Level]
 
 |Scan consistency
-|xref:settings:query-settings.adoc#transactional-scan-consistency[Transactional Scan Consistency]
+|xref:n1ql:n1ql-manage/query-settings.adoc#transactional-scan-consistency[Transactional Scan Consistency]
 
 |Request-level Query parameters
 |Request-level parameters when using queries within transactions. See xref:n1ql:n1ql-language-reference/transactions.adoc#settings-and-parameters[{sqlpp} Transactions Settings] for details.
@@ -310,14 +310,14 @@ For more information, see xref:java-sdk:howtos:distributed-acid-transactions-fro
 
 |tximplicit
 |Specifies that a DML statement is a singleton transaction. By default, it is set to false.
-See xref:settings:query-settings.adoc#tximplicit[tximplicit] for details.
+See xref:n1ql:n1ql-manage/query-settings.adoc#tximplicit[tximplicit] for details.
 
 |kvtimeout
-|Specifies the maximum time to wait for a KV operation before timing out. The default value is 2.5s. See xref:settings:query-settings.adoc#kvtimeout[kvtimeout] for details.
+|Specifies the maximum time to wait for a KV operation before timing out. The default value is 2.5s. See xref:n1ql:n1ql-manage/query-settings.adoc#kvtimeout[kvtimeout] for details.
 
 |atrcollection
 |Specifies the collection where the active transaction records (ATRs) and client records are stored. The collection must be present. If not specified, the ATR is stored in the default collection in the default scope in the bucket containing the first mutated document within the transaction. See
-xref:settings:query-settings.adoc#atrcollection_req[atrcollection] for details.
+xref:n1ql:n1ql-manage/query-settings.adoc#atrcollection_req[atrcollection] for details.
 |===
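The request-level parameters in the table above can be combined in a single Query REST request body; a minimal sketch (the statement and values are hypothetical):

```python
import json

# Illustrative request-level parameters for a singleton transaction.
body = {
    "statement": "UPDATE airline SET country = 'France' WHERE callsign = 'AF'",
    "tximplicit": True,   # run this single DML statement as its own transaction
    "kvtimeout": "2.5s",  # maximum wait for a KV operation (the default)
}
print(json.dumps(body, indent=2))
```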
 
 == Related Topics
diff --git a/modules/learn/pages/security/roles.adoc b/modules/learn/pages/security/roles.adoc
index 56fd02c248..3f89c7c1b2 100644
--- a/modules/learn/pages/security/roles.adoc
+++ b/modules/learn/pages/security/roles.adoc
@@ -1,5 +1,5 @@
 = Roles
-:description: pass:q[A Couchbase _role_ permits one or more _resources_ to be accessed according to defined _privileges_.]
+:description: pass:q[A Couchbase role permits one or more resources to be accessed according to defined privileges.]
 :page-aliases: security:security-roles,security:concepts-rba,security:concepts-rba-for-apps,security:rbac-ro-user,learn:security/resources-under-access-control,security:security-resources-under-access-control
 
 [abstract]
@@ -8,27 +8,27 @@
 [#roles-and-privileges]
 == Roles and Privileges
 
-Couchbase _roles_ each have a fixed association with a set of one or more privileges.
+Couchbase roles each have a fixed association with a set of one or more privileges.
 Each privilege is associated with a resource.
 Privileges are actions such as *Read*, *Write*, *Execute*, *Manage*, *Flush*, and *List*; or a combination of some or all of these.
 
 Roles are of the following kinds:
 
-* _Administative_: Associated with cluster-wide privileges.
+* Administrative: Associated with cluster-wide privileges.
 Some of these roles are for administrators; who might manage cluster-configurations; or read statistics; or enforce security.
 Others are for users and user-defined applications that require access to specific, cluster-wide resources.
 
-* _Bucket_: Associated with bucket administration, collection management, and application access.
+* Bucket: Associated with bucket administration, collection management, and application access.
 Roles in this category can each be applied to one, to multiple, or to all buckets on the cluster.
 
-* _Data_, _Views_, and _XDCR_: Associated with the Data Service.
-This includes the reading, writing, monitoring, backing-up, and restoring of data; the administration of Views; and the administration of _Cross Data-Center Replication_ (XDCR).
+* Data, Views, and XDCR: Associated with the Data Service.
+This includes the reading, writing, monitoring, backing-up, and restoring of data; the administration of Views; and the administration of Cross Data-Center Replication (XDCR).
 
-* _Other Services_: Roles for the administration of services other than the Data Service.
-These roles are organized under the following categories: _Query & Index_, _Search_, _Analytics_, and _Backup_.
-(_Eventing_ administration is covered within the _Administrative_ category.)
+* Other Services: Roles for the administration of services other than the Data Service.
+These roles are organized under the following categories: Query & Index, Search, Analytics, and Backup.
+(Eventing administration is covered within the Administrative category.)
 
-* _Mobile_: Associated with the administration of _Sync Gateway_.
+* Mobile: Associated with the administration of Sync Gateway.
 
 When a user (meaning either an administrator or an application) attempts to access a resource, they must authenticate.
 The roles and privileges associated with the user-credentials thereby presented are checked by Couchbase Server.
@@ -40,21 +40,21 @@ If the associated roles contain privileges that support the kind of access that
 All data within a bucket is contained within some collection, within some scope.
 Permissions conveyed by bucket-related roles may be restricted in any of the following ways:
 
-* By _Bucket_: Permissions apply to all data in the specified bucket: all scopes and collections are thus covered by the permissions.
+* By Bucket: Permissions apply to all data in the specified bucket: all scopes and collections are thus covered by the permissions.
 
-* By _Bucket_ and _Scope_: Permissions apply only to the collections within the specified scope (or scopes), within the specified bucket.
+* By Bucket and Scope: Permissions apply only to the collections within the specified scope (or scopes), within the specified bucket.
 
-* By _Bucket_, _Scope_, and _Collection_: Permissions apply only to the data within the specified collection (or collections), within the specified scope (or scopes), within the specified bucket.
+* By Bucket, Scope, and Collection: Permissions apply only to the data within the specified collection (or collections), within the specified scope (or scopes), within the specified bucket.
 
 For detailed information on scopes and collections, see xref:learn:data/scopes-and-collections.adoc[Scopes and Collections].
 
 [#commonly-used-roles]
 === Commonly Used Roles
 
-Couchbase Server _users_ can largely be categorized as _administrators_, _developers_, and _applications_.
+Couchbase Server users can largely be categorized as administrators, developers, and applications.
 Each user-category is supported by a different subset of roles.
 
-* _Administrators_.
+* Administrators.
 Able to log into Couchbase Web Console and perform administrative tasks; but unable to read or write data.
 +
 The administrative tasks available are divided into multiple `admin` roles.
@@ -62,19 +62,19 @@ For example, the *Cluster Admin* role allows the management of all cluster featu
 See the *Admin* roles listed below for full details.
 Note that depending on the administrator's assigned roles, the content of Couchbase Web Console changes: for example, the entire *Security* screen is only visible to *Full Admin* administrators; and to administrators who possess both the *Local User Security Admin* and the *External User Security Admin* roles.
 
-* _Applications_.
+* Applications.
 Able to read or write data; but unable to log into Couchbase Web Console, or in any way modify cluster-settings.
 For example, the *Data Reader* and *Data Writer* roles allows data to be respectively read and written to one or more collections, within one or more scopes, within one or more buckets.
 Other application-intended roles are *Application Access*, *Data Writer*, *Data Backup & Restore*, and *Data Monitor*.
 See below for details on each.
 
-* _Developers_.
+* Developers.
 Can be given a selection of roles, allowing the right degree of data and console access.
 For example, the *Read-Only Admin* role allows the reading of cluster-statistics, while the *Data Read* and *Data Write* roles allow access to data on one or more buckets.
 
-The following list contains all roles supported by Couchbase Server, _Enterprise Edition_.
+The following list contains all roles supported by Couchbase Server, Enterprise Edition.
 Each role is explained by means of a description and (in most cases) a table: the table lists the privileges in association with resources.
-The header of each table states the role's *name*, followed by its _alias name_ in parentheses: alias names are used in commands and queries.
+The header of each table states the role's *name*, followed by its alias name in parentheses: alias names are used in commands and queries.
 In each table-body, where a privilege is associated with a resource, this is indicated with a check-mark.
 Where a privilege is not associated with a resource (or where association would not be applicable), this is indicated with a cross.
 Resources not referred to in a particular table have no privileges associated with them in the context of the role being described.
@@ -88,15 +88,15 @@ See xref:manage:manage-logging/manage-logging.adoc[Manage Logging], for detailed
 [#full-admin]
 == Full Admin
 
-The *Full Admin* role (an _Administrative_ role) supports full access to all Couchbase-Server features and resources, including those of security.
+The *Full Admin* role (an Administrative role) supports full access to all Couchbase-Server features and resources, including those of security.
 The role allows access to Couchbase Web Console, and allows the reading and writing of bucket-data.
 
-This role is also available in Couchbase Server _Community Edition_.
+This role is also available in Couchbase Server Community Edition.
 
 [#cluster-admin]
 == Cluster Admin
 
-The *Cluster Admin* role (an _Administrative_ role) allows the management of all cluster features except security.
+The *Cluster Admin* role (an Administrative role) allows the management of all cluster features except security.
 The role allows access to Couchbase Web Console, but does not permit the writing of data.
 
 [#table_cluster_admin_role,cols="15,8,8,8,8",hrows=3]
@@ -139,7 +139,7 @@ The role allows access to Couchbase Web Console, but does not permit the writing
 [#local-user-security-admin]
 == Local User Security Admin
 
-The *Local User Security Admin* role (an _Administrative_ role) allows the management of local user roles and the reading of all cluster statistics.
+The *Local User Security Admin* role (an Administrative role) allows the management of local user roles and the reading of all cluster statistics.
 The role does not permit the granting of the *Full Admin*, the *Read-Only Admin*, the *Local User Security Admin*, or the *External User Security Admin* role; and does not permit the administrator to change their own role (which therefore remains *Local User Security Admin*).
 The role supports access to Couchbase Web Console, but does not support the reading of data.
 
@@ -183,7 +183,7 @@ The role supports access to Couchbase Web Console, but does not support the read
 [#external-user-security-admin]
 == External User Security Admin
 
-The *External User Security Admin* role (an _Administrative_ role) allows the management of external user roles and the reading of all cluster statistics.
+The *External User Security Admin* role (an Administrative role) allows the management of external user roles and the reading of all cluster statistics.
 The role does not permit the granting of the *Full Admin*, the *Read-Only Admin*, the *Local User Security Admin*, or the *External User Security Admin* role; and does not permit the administrator to change their own role (which therefore remains *External User Security Admin*).
 The role supports access to Couchbase Web Console, but does not support the reading of data.
 
@@ -227,10 +227,12 @@ The role supports access to Couchbase Web Console, but does not support the read
 [#read-only-admin]
 == Read-Only Admin
 
-The *Read-Only Admin* role (an _Administrative_ role) supports the reading of Couchbase Server-statistics: this includes registered usernames with roles and authentication domains, but excludes passwords.
-The role allows access to Couchbase Web Console.
+The *Read-Only Admin* role (an Administrative role) supports the reading of Couchbase Server statistics.
+This information includes registered usernames with roles and authentication domains, but excludes passwords.
+Users with this role can also read Backup Service data to monitor backup plans and tasks.
+The role allows access to Couchbase Server Web Console.
 
-This role is also available in Couchbase Server _Community Edition_.
+This role is also available in Couchbase Server Community Edition.
 
 [#table_read_only_admin_role,cols="15,8,8,8,8",hrows=3]
 |===
@@ -267,12 +269,19 @@ This role is also available in Couchbase Server _Community Edition_.
 ^| image:introduction/no.png[]
 ^| image:introduction/no.png[]
 ^| image:introduction/no.png[]
+
+^| Backup Service (tasks and plans)
+^| image:introduction/yes.png[]
+^| image:introduction/no.png[]
+^| image:introduction/no.png[]
+^| image:introduction/no.png[]
+
 |===
 
 [#external-stats-reader]
 == External Stats Reader
 
-The *External Stats Reader* role (an _Administrative_ role) grants access to the `/metrics` and `/prometheus_sd_config` endpoints for _Prometheus_ integration.
+The *External Stats Reader* role (an Administrative role) grants access to the `/metrics` and `/prometheus_sd_config` endpoints for Prometheus integration.
 All statistics for all services can be read.
 The role does not allow access to Couchbase Web Console.
 
@@ -298,7 +307,7 @@ The role does not allow access to Couchbase Web Console.
 [#xdcr-admin]
 == XDCR Admin
 
-The *XDCR Admin* role (an _XDCR_ role) allows use of XDCR features, to create cluster references and replication streams.
+The *XDCR Admin* role (an XDCR role) allows use of XDCR features, to create cluster references and replication streams.
 The role allows access to Couchbase Web Console and allows the reading of data.
 
 [#table_xdcr_admin_role,cols="15,8,8,8,8",hrows=3]
@@ -353,7 +362,7 @@ The role allows access to Couchbase Web Console and allows the reading of data.
 [#query-curl-access]
 == Query Curl Access
 
-The *Query Curl Access* role (a _Query & Index_ role) allows the {sqlpp} CURL function to be executed by an externally authenticated user.
+The *Query Curl Access* role (a Query & Index role) allows the {sqlpp} CURL function to be executed by an externally authenticated user.
 The user can access Couchbase Web Console, but cannot read data, other than that returned by the {sqlpp} CURL function.
 
 Note that the *Query Curl Access* role should be assigned with caution, since it entails risk: CURL runs within the local Couchbase Server network; therefore, the assignee of the *Query Curl Access* role is permitted to run GET and POST requests on the internal network, while being themselves externally located.
@@ -402,7 +411,7 @@ In versions of Couchbase Server prior to 5.5, this role was referred to as *Quer
 [#query-system-catalog]
 == Query System Catalog
 
-The *Query System Catalog* role (a _Query & Index_ role) allows information to be looked up by means of {sqlpp} in the system catalog: this includes `system:indexes`, `system:prepareds`, and tables listing current and past queries.
+The *Query System Catalog* role (a Query & Index role) allows information to be looked up by means of {sqlpp} in the system catalog: this includes `system:indexes`, `system:prepareds`, and tables listing current and past queries.
 This role is designed for troubleshooters, who need to debug queries.
 The role allows access to Couchbase Web Console, but does not permit the reading of bucket-items.
 
@@ -458,7 +467,7 @@ The role allows access to Couchbase Web Console, but does not permit the reading
 [#manage-global-functions]
 == Manage Global Functions
 
-The *Manage Global Functions* role (a _Query & Index_ role) allows global {sqlpp} functions to be managed.
+The *Manage Global Functions* role (a Query & Index role) allows global {sqlpp} functions to be managed.
 The user can access Couchbase Web Console, but cannot read data.
 
 [#table_manage_global_functions_role,cols="15,8,8,8,8",hrows=3]
@@ -495,7 +504,7 @@ The user can access Couchbase Web Console, but cannot read data.
 [#execute-global-functions]
 == Execute Global Functions
 
-The *Execute Global Functions* role (a _Query & Index_ role) allows global {sqlpp} functions to be executed.
+The *Execute Global Functions* role (a Query & Index role) allows global {sqlpp} functions to be executed.
 The user can access Couchbase Web Console, but cannot read data.
 
 [#table_query_execute_global_functions_role,cols="15,8,8,8,8",hrows=3]
@@ -532,7 +541,7 @@ The user can access Couchbase Web Console, but cannot read data.
 [#manage-scope-functions]
 == Manage Scope Functions (Query and Index)
 
-The *Manage Scope Functions* role (a _Query & Index_ role) allows {sqlpp} and _user defined_ functions to be managed for a given scope, given corresponding specification of _bucket_.
+The *Manage Scope Functions* role (a Query & Index role) allows {sqlpp} and user-defined functions to be managed for a given scope, given corresponding specification of bucket.
 The user can access Couchbase Web Console, but cannot read data.
 
 [#table_manage_scope_functions_role,cols="15,8,8,8,8",hrows=3]
@@ -569,7 +578,7 @@ The user can access Couchbase Web Console, but cannot read data.
 [#execute-scope-functions]
 == Execute Scope Functions
 
-The *Execute Scope Functions* role (a _Query & Index_ role) allows {sqlpp} and _user defined_ functions to be executed for a given scope, given corresponding specification of _bucket_.
+The *Execute Scope Functions* role (a Query & Index role) allows {sqlpp} and user-defined functions to be executed for a given scope, given corresponding specification of bucket.
 The user can access Couchbase Web Console, but cannot read data.
 
 [#table_execute_scope_functions_role,cols="15,8,8,8,8",hrows=3]
@@ -606,7 +615,7 @@ The user can access Couchbase Web Console, but cannot read data.
 [#manage-global-external-functions]
 == Manage Global External Functions
 
-The *Manage Global External Functions* role (a _Query & Index_ role) allows global external language functions to be managed.
+The *Manage Global External Functions* role (a Query & Index role) allows global external language functions to be managed.
 The user can access Couchbase Web Console, but cannot read data.
 
 [#table_manage_global_external_functions_role,cols="15,8,8,8,8",hrows=3]
@@ -643,7 +652,7 @@ The user can access Couchbase Web Console, but cannot read data.
 [#execute-global-external-functions]
 == Execute Global External Functions
 
-The *Execute Global External Functions* role (a _Query & Index_ role) allows global {sqlpp} functions to be executed.
+The *Execute Global External Functions* role (a Query & Index role) allows global {sqlpp} functions to be executed.
 The user can access Couchbase Web Console, but cannot read data.
 
 [#table_execute_global_external_functions_role,cols="15,8,8,8,8",hrows=3]
@@ -680,7 +689,7 @@ The user can access Couchbase Web Console, but cannot read data.
 [#manage-scope-external-functions]
 == Manage Scope External Functions
 
-The *Manage Scope External Functions* role (a _Query & Index_ role) allows external language functions to be managed for a given scope, given corresponding specification of _bucket_.
+The *Manage Scope External Functions* role (a Query & Index role) allows external language functions to be managed for a given scope, given corresponding specification of bucket.
 The user can access Couchbase Web Console, but cannot read data.
 
 [#table_manage_external_functions_role,cols="15,8,8,8,8",hrows=3]
@@ -717,7 +726,7 @@ The user can access Couchbase Web Console, but cannot read data.
 [#execute-scope-external-functions]
 == Execute Scope External Functions
 
-The *Execute Scope External Functions* role (a _Query & Index_ role) allows external language functions to be executed for a given scope, given corresponding specification of _bucket_.
+The *Execute Scope External Functions* role (a Query & Index role) allows external language functions to be executed for a given scope, given corresponding specification of bucket.
 The user can access Couchbase Web Console, but cannot read data.
 
 [#table_execute_external_functions_role,cols="15,8,8,8,8",hrows=3]
@@ -754,7 +763,7 @@ The user can access Couchbase Web Console, but cannot read data.
 [#analytics-reader]
 == Analytics Reader
 
-The *Analytics Reader* role (an _Analytics_ role) allows querying of shadow data-sets.
+The *Analytics Reader* role (an Analytics role) allows querying of shadow data-sets.
 The role allows access to Couchbase Web Console, and permits the reading of data.
 
 [#table_analytics_reader_role,cols="15,8,8,8,8",hrows=3]
@@ -791,7 +800,7 @@ The role allows access to Couchbase Web Console, and permits the reading of data
 [#analytics-admin]
 == Analytics Admin
 
-The *Analytics Admin* role (an _Analytics_ role) allows management of dataverses; management of all Analytics Service links; and management of all datasets.
+The *Analytics Admin* role (an Analytics role) allows management of dataverses; management of all Analytics Service links; and management of all datasets.
 The role allows access to Couchbase Web Console, but does not permit the reading of data.
 
 [#table_analytics_admin_role,cols="15,8,8,8,8",hrows=3]
@@ -840,7 +849,7 @@ The role allows access to Couchbase Web Console, but does not permit the reading
 [#bucket-admin]
 == Bucket Admin
 
-The *Bucket Admin* role (which is a _Bucket_ role) allows the management of all _per bucket_ features (including starting and stopping XDCR).
+The *Bucket Admin* role (which is a Bucket role) allows the management of all per bucket features (including starting and stopping XDCR).
 The role allows access to Couchbase Web Console, but does not permit the reading or writing of data.
 
 [#table_bucket_admin_role,cols="15,8,8,8,8",hrows=3]
@@ -889,7 +898,7 @@ The role allows access to Couchbase Web Console, but does not permit the reading
 [#manage-scopes]
 == Manage Scopes
 
-The *Manage Scopes* role (a _Bucket_ role) allows the creation and deletion of scopes, and the creation and deletion of collections _per scope_, given the corresponding specification of _bucket_.
+The *Manage Scopes* role (a Bucket role) allows the creation and deletion of scopes, and the creation and deletion of collections per scope, given the corresponding specification of bucket.
 The role allows no access to data, and does not permit access to Couchbase Web Console.
 The role is intended for application use only.
 
@@ -921,12 +930,12 @@ The role is intended for application use only.
 [#application-access]
 == Application Access
 
-The *Application Access* role (a _Bucket_ role) provides read and write access to data, _per bucket_.
+The *Application Access* role (a Bucket role) provides read and write access to data, per bucket.
 The role does not allow access to Couchbase Web Console: it is intended for applications, rather than users.
-Note that this role is also available in the _Community Edition_ of Couchbase Server.
+Note that this role is also available in the Community Edition of Couchbase Server.
 
 The role is provided in support of buckets that were created on versions of Couchbase Server prior to 5.0.
-Such buckets were accessed by specifying _bucket-name_ and _bucket-password_: however, bucket-passwords are not recognized by Couchbase Server 5.0 and after.
+Such buckets were accessed by specifying bucket-name and bucket-password: however, bucket-passwords are not recognized by Couchbase Server 5.0 and after.
 Therefore, for each pre-existing bucket, the upgrade-process for 5.0 and after creates a new user, whose username is identical to the bucket-name; and whose password is identical to the former bucket-password, if one existed.
 If no bucket-password existed, the user is created with no password.
 This migration-process allows the same name-combination as before to be used in authentication.
@@ -994,7 +1003,7 @@ Note that in versions of Couchbase Server prior to 5.5, this role was referred t
 [#xdcr-inbound]
 == XDCR Inbound
 
-The *XDCR Inbound* role (which is an _XDCR_ role) allows the creation of inbound XDCR streams, _per bucket_.
+The *XDCR Inbound* role (which is an XDCR role) allows the creation of inbound XDCR streams, per bucket.
 It does not allow access to Couchbase Web Console, and does not permit the reading of data.
 
 In versions of Couchbase Server prior to 5.5, this role was referred to as *Replication Target*.
@@ -1039,7 +1048,7 @@ In versions of Couchbase Server prior to 5.5, this role was referred to as *Repl
 [#sync-gateway]
 == Sync Gateway
 
-The *Sync Gateway* role (which is a _Mobile_ role) allows full access to data _per bucket_, as required by Sync Gateway.
+The *Sync Gateway* role (which is a Mobile role) allows full access to data per bucket, as required by Sync Gateway.
 The role does not allow access to Couchbase Web Console.
 The user can, by means of Sync Gateway, read and write data, manage indexes and views, and read some cluster information.
 
@@ -1119,7 +1128,7 @@ The user can, by means of Sync Gateway, read and write data, manage indexes and
 [#sync-gateway-configurator]
 == Sync Gateway Architect
 
-The *Sync Gateway Architect* role (which is a _Mobile_ role) allows management of Sync Gateway databases; and of Sync Gateway users and roles; and allows access to Sync Gateway's `/metrics` endpoint.
+The *Sync Gateway Architect* role (which is a Mobile role) allows management of Sync Gateway databases; and of Sync Gateway users and roles; and allows access to Sync Gateway's `/metrics` endpoint.
 The role does not allow access to Couchbase Web Console; and does not allow reading of application data.
 For information on Sync Gateway users and roles, see http://docs.couchbase.com/sync-gateway/3.0/access-control-concepts.html[Access Control Concepts^].
 
@@ -1163,7 +1172,7 @@ For information on Sync Gateway users and roles, see http://docs.couchbase.com/s
 [#sync-gateway-app]
 == Sync Gateway Application
 
-The *Sync Gateway Application* role (which is a _Mobile_ role) allows management of Sync Gateway users and roles; and allows application data to be read and written through Sync Gateway.
+The *Sync Gateway Application* role (which is a Mobile role) allows management of Sync Gateway users and roles; and allows application data to be read and written through Sync Gateway.
 The role does not allow access to Couchbase Web Console.
 For information on Sync Gateway users and roles, see http://docs.couchbase.com/sync-gateway/3.0/access-control-concepts.html[Access Control Concepts^].
 
@@ -1201,7 +1210,7 @@ For information on Sync Gateway users and roles, see http://docs.couchbase.com/s
 [#sync-gateway-application-read-only]
 == Sync Gateway Application Read Only
 
-The *Sync Gateway Application Read Only* role (which is a _Mobile_ role) allows reading of Sync Gateway users and roles; and allows application data to be read through Sync Gateway.
+The *Sync Gateway Application Read Only* role (which is a Mobile role) allows reading of Sync Gateway users and roles; and allows application data to be read through Sync Gateway.
 The role does not allow access to Couchbase Web Console.
 For information on Sync Gateway users and roles, see http://docs.couchbase.com/sync-gateway/3.0/access-control-concepts.html[Access Control Concepts^].
 
@@ -1239,7 +1248,7 @@ For information on Sync Gateway users and roles, see http://docs.couchbase.com/s
 [#sync-gateway-replicator]
 == Sync Gateway Replicator
 
-The *Sync Gateway Replicator* role (which is a _Mobile_ role) allows management of Sync Gateway replications.
+The *Sync Gateway Replicator* role (which is a Mobile role) allows management of Sync Gateway replications.
 The role does not allow access to Couchbase Web Console.
 
 [#table_sync_gateway_replicator_role,cols="15,8,8,8,8",hrows=3]
@@ -1270,7 +1279,7 @@ The role does not allow access to Couchbase Web Console.
 [#sync-gateway-dev-ops]
 == Sync Gateway Dev Ops
 
-The *Sync Gateway Dev Ops* role (which is a _Mobile_ role) allows management of Sync Gateway node-level configuration; and allows access to Syn Gateway's `/metrics` endpoint, for Prometheus integration.
+The *Sync Gateway Dev Ops* role (which is a Mobile role) allows management of Sync Gateway node-level configuration; and allows access to Sync Gateway's `/metrics` endpoint, for Prometheus integration.
 The role does not allow access to Couchbase Web Console.
 
 [#table_sync_gateway_dev_ops_role,cols="15,8,8,8,8",hrows=3]
@@ -1307,8 +1316,8 @@ The role does not allow access to Couchbase Web Console.
 [#data-reader]
 == Data Reader
 
-The *Data Reader* role (which is a _Data_ role) allows data to be read _per collection_, given corresponding specifications for _bucket_ and _scope_.
-Note that the role does _not_ permit the running of {sqlpp} queries (such as SELECT) against data.
+The *Data Reader* role (which is a Data role) allows data to be read per collection, given corresponding specifications for bucket and scope.
+Note that the role does not permit the running of {sqlpp} queries (such as SELECT) against data.
 The role does not allow access to Couchbase Web Console: it is intended to support applications, rather than users.
 
 [#table_data_reader_role,cols="15,8,8,8,8",hrows=3]
@@ -1351,7 +1360,7 @@ The role does not allow access to Couchbase Web Console: it is intended to suppo
 [#data-writer]
 == Data Writer
 
-The *Data Writer* role (which is a _Data_ role) allows data to be written _per collection_, given corresponding specifications for _bucket_ and _scope_.
+The *Data Writer* role (which is a Data role) allows data to be written per collection, given corresponding specifications for bucket and scope.
 The role does not allow access to Couchbase Web Console: it is intended to support applications, rather than users.
 
 [#table_data_writer_role,cols="15,8,8,8,8",hrows=3]
@@ -1388,7 +1397,7 @@ The role does not allow access to Couchbase Web Console: it is intended to suppo
 [#data-dcp-reader]
 == Data DCP Reader
 
-The *Data DCP Reader* role (which is a _Data_ role) allows DCP streams to be initiated _per collection_, given corresponding specifications for _bucket_ and _scope_.
+The *Data DCP Reader* role (which is a Data role) allows DCP streams to be initiated per collection, given corresponding specifications for bucket and scope.
 The role does not allow access to Couchbase Web Console: it is intended to support applications, rather than users.
 The role does allow the reading of data.
 
@@ -1438,11 +1447,11 @@ The role does allow the reading of data.
 [#data-backup-and-restore]
 == Data Backup & Restore
 
-The *Data Backup & Restore* role (which is a _Data_ role) allows data to be backed up and restored, _per bucket_.
+The *Data Backup & Restore* role (which is a Data role) allows data to be backed up and restored, per bucket.
 The role supports the reading of data.
 The role does not allow access to Couchbase Web Console: it is intended to support applications, rather than users.
 
-The privileges represented in this table are, from left to right, _Read_, _Write_, _Execute_, _Manage_, _Select_, _Backup_, _Create_, _List_, and _Build_.
+The privileges represented in this table are, from left to right, Read, Write, Execute, Manage, Select, Backup, Create, List, and Build.
 
 [#table_data_backup_role,cols="8,3,3,3,3,3,3,3,3,3",hrows=3]
 |===
@@ -1575,7 +1584,7 @@ The privileges represented in this table are, from left to right, _Read_, _Write
 [#data-monitor]
 == Data Monitor
 
-The *Data Monitor* role (which is a _Data_ role) allows statistics to be read for a given _bucket_, _scope_, or _collection_.
+The *Data Monitor* role (which is a Data role) allows statistics to be read for a given bucket, scope, or collection.
 It does not allow access to Couchbase Web Console, and does not permit the reading of data.
 This role is intended to support application-access, rather than user-access.
 
@@ -1609,7 +1618,7 @@ In versions of Couchbase Server prior to 5.5, this role was referred to as *Data
 [#views-admin]
 == Views Admin
 
-The *Views Admin* role (which is a _Views_ role) allows the management of views, _per bucket_.
+The *Views Admin* role (which is a Views role) allows the management of views, per bucket.
 The role allows access to Couchbase Web Console.
 
 [#table_views_admin_role,cols="15,8,8,8,8",hrows=3]
@@ -1670,7 +1679,7 @@ The role allows access to Couchbase Web Console.
 [#views-reader]
 == Views Reader
 
-The *Views Reader* role (which is an _Administrative_ role) allows data to be read from views, _per bucket_.
+The *Views Reader* role (which is an Administrative role) allows data to be read from views, per bucket.
 This role does not allow access to Couchbase Web Console, and is intended to support applications, rather than users.
 
 [#table_views_reader_role,cols="15,8,8,8,8",hrows=3]
@@ -1707,7 +1716,7 @@ This role does not allow access to Couchbase Web Console, and is intended to sup
 [#query-select]
 == Query Select
 
-The *Query Select* role (which is a _Query & Index_ role) allows the SELECT statement to be executed _per collection_, given corresponding specifications for _bucket_ and _scope_.
+The *Query Select* role (which is a Query & Index role) allows the SELECT statement to be executed per collection, given corresponding specifications for bucket and scope.
 This role allows access to Couchbase Web Console; it also supports the reading of data, and of bucket settings.
 
 [#table_query_select_role,cols="15,8,8,8,8",hrows=3]
@@ -1756,7 +1765,7 @@ This role allows access to Couchbase Web Console; it also supports the reading o
 [#query-update]
 == Query Update
 
-The *Query Update* role (which is a _Query & Index_ role) allows the UPDATE statement to be executed _per collection_, given corresponding specifications for _bucket_ and _scope_.
+The *Query Update* role (which is a Query & Index role) allows the UPDATE statement to be executed per collection, given corresponding specifications for bucket and scope.
 The role supports access to Couchbase Web Console, and allows the writing (but not the reading) of data.
 It allows the reading of bucket settings.
 
@@ -1806,7 +1815,7 @@ It allows the reading of bucket settings.
 [#query-insert]
 == Query Insert
 
-The *Query Insert* role (which is a _Query & Index_ role) allows the INSERT statement to be executed _per collection_, given corresponding specifications for _bucket_ and _scope_.
+The *Query Insert* role (which is a Query & Index role) allows the INSERT statement to be executed per collection, given corresponding specifications for bucket and scope.
 The role supports access to Couchbase Web Console, and allows the writing (but not the reading) of data.
 It allows the reading of bucket settings.
 
@@ -1856,7 +1865,7 @@ It allows the reading of bucket settings.
 [#query-delete]
 == Query Delete
 
-The *Query Delete* role (which is a _Query & Index_ role) allows the DELETE statement to be executed _per collection_, given corresponding specifications for _bucket_ and _scope_.
+The *Query Delete* role (which is a Query & Index role) allows the DELETE statement to be executed per collection, given corresponding specifications for bucket and scope.
 The role supports access to Couchbase Server Web Console, and allows the deletion of data.
 It allows the reading of bucket settings.
 
@@ -1960,7 +1969,7 @@ Administrators' queries automatically have permission to perform sequential scan
 [#query-manage-index]
 == Query Manage Index
 
-The *Query Manage Index* role (which is a _Query & Index_ role) allows indexes to be managed _per collection_, given corresponding specifications for _bucket_ and _scope_.
+The *Query Manage Index* role (which is a Query & Index role) allows indexes to be managed per collection, given corresponding specifications for bucket and scope.
 The role allows access to Couchbase Web Console, but does not permit the reading of data.
 
 [#table_query_manage_index_role,cols="15,8,8,8,8",hrows=3]
@@ -2015,7 +2024,7 @@ The role allows access to Couchbase Web Console, but does not permit the reading
 [#eventing-full-admin]
 == Eventing Full Admin
 
-The *Eventing Full Admin* role (which is an _Eventing_ role) allows creation and management of eventing functions.
+The *Eventing Full Admin* role (which is an Eventing role) allows creation and management of eventing functions.
 The role allows access to Couchbase Web Console.
 
 [#table_eventing_admin_role,cols="15,8,8,8,8",hrows=3]
@@ -2064,7 +2073,7 @@ The role allows access to Couchbase Web Console.
 [#eventing-manage-functions]
 == Manage Scope Functions (Eventing)
 
-The *Manage Scope Functions* role (which is an _Eventing_ role) allows eventing functions for a given scope to be managed.
+The *Manage Scope Functions* role (which is an Eventing role) allows eventing functions for a given scope to be managed.
 The role allows access to Couchbase Web Console.
 
 [#table_eventing_manage_functions,cols="15,8,8,8,8",hrows=3]
@@ -2103,7 +2112,7 @@ The role allows access to Couchbase Web Console.
 [#backup-full-admin]
 == Backup Full Admin
 
-The *Backup Full Admin* role (which is a _Backup_ role) allows performance of backup-related tasks.
+The *Backup Full Admin* role (which is a Backup role) allows performance of backup-related tasks.
 The role allows access to Couchbase Web Console.
 
 [#table_backup_admin_role,cols="15,8,8,8,8",hrows=3]
@@ -2152,7 +2161,7 @@ The role allows access to Couchbase Web Console.
 [#search-admin]
 == Search Admin
 
-The *Search Admin* role (which is a _Search_ role) allows management of all features of the Search Service, _per bucket_.
+The *Search Admin* role (which is a Search role) allows management of all features of the Search Service, per bucket.
 The role allows access to Couchbase Web Console.
 
 In versions of Couchbase Server prior to 5.5, this role was referred to as *FTS Admin*.
@@ -2209,7 +2218,7 @@ In versions of Couchbase Server prior to 5.5, this role was referred to as *FTS
 [#search-reader]
 == Search Reader
 
-The role *Search Reader* (which is a _Search_ role) allows _Full Text Search_ indexes to be searched for _bucket_, _scope_, and _collection_.
+The role *Search Reader* (which is a Search role) allows Full Text Search indexes to be searched for bucket, scope, and collection.
 The role allows access to Couchbase Web Console, and supports the reading of data.
 
 In versions of Couchbase Server prior to 5.5, this role was referred to as *FTS Searcher*.
@@ -2254,7 +2263,7 @@ In versions of Couchbase Server prior to 5.5, this role was referred to as *FTS
 [#analytics-select]
 == Analytics Select
 
-The *Analytics Select* role (which is an _Analytics_ role) allows the querying of datasets  for _bucket_, _scope_. and _collection_.
+The *Analytics Select* role (which is an Analytics role) allows the querying of datasets for bucket, scope, and collection.
 The role allows access to Couchbase Web Console, and permits the reading of some data.
 
 [#table_analytics_select_role,cols="15,8,8,8,8",hrows=3]
@@ -2291,7 +2300,7 @@ The role allows access to Couchbase Web Console, and permits the reading of some
 [#analytics-manager]
 == Analytics Manager
 
-The *Analytics Manager* role (which is an _Analytics_ role) allows the management and querying of datasets created _per bucket_; and the management of Analytics Service local links.
+The *Analytics Manager* role (which is an Analytics role) allows the management and querying of datasets created per bucket; and the management of Analytics Service local links.
 The role allows access to Couchbase Web Console, and permits the reading of some data.
 
 [#table_analytics_manager_role,cols="15,8,8,8,8",hrows=3]
diff --git a/modules/learn/pages/services-and-indexes/services/backup-service.adoc b/modules/learn/pages/services-and-indexes/services/backup-service.adoc
index 432612ad1d..0cafd855c5 100644
--- a/modules/learn/pages/services-and-indexes/services/backup-service.adoc
+++ b/modules/learn/pages/services-and-indexes/services/backup-service.adoc
@@ -1,5 +1,5 @@
 = Backup Service
-:description: pass:q[The Backup Service allows full and incremental data-backups to be scheduled, and also allows the scheduling of _merges_ of previously made data-backups.]
+:description: pass:q[The Backup Service schedules full and incremental data backups and merges of previous backups.]
 
 [abstract]
 {description}
@@ -7,148 +7,198 @@
 [#backup-service-overview]
 == Overview
 
-The Backup Service supports the scheduling of full and incremental data backups, either for specific individual buckets, or for all buckets on the cluster.
-(Both Couchbase and Ephemeral buckets can be backed up).
-The Backup Service also allows the scheduling of _merges_ of previously made backups.
-Data to be backed up can also be selected by _service_: for example, the data for the _Data_ and _Index_ Services alone might be selected for backup, with no other service's data included.
+The Backup Service lets you schedule full and incremental data backups for individual buckets or for all buckets in the cluster.
+It supports backing up both Couchbase and Ephemeral buckets.
+The Backup Service also supports scheduled merges of previous backups.
+You can choose what data to back up by service.
+For example, you can choose to back up the data for just the Data and Index Services.
 
-The service — which is also referred to as _backup_ (Couchbase Backup Service) — can be configured and administered by means of the Couchbase Web Console UI, the CLI, or the REST API.
+You can configure and administer the Backup Service using the Couchbase Server Web Console, the command-line tools, or the REST API.
 
 [#backup-service-and-cbbackupmgr]
 == The Backup Service and cbbackupmgr
 
-The Backup Service's underlying backup tasks are performed by `cbbackupmgr`, which can also be used independently, on the command line, to perform backups and merges.
-The Backup Service and `cbbackupmgr` (when the latter is used independently) have the following, principal differences:
+The Backup Service uses the `cbbackupmgr` command-line tool to perform backups. 
+You can also directly use this tool to perform backups and merges.
+Either backup method lets you perform incremental backups and merge incremental backups to deduplicate data.
+They use the same backup archive structure.
+You can list the contents of backed-up data and search for specific documents no matter how you back up the data.
 
-* Whereas the Backup Service allows backup, restore, and archiving to be configured for the local cluster, and also permits restore to be configured for a remote cluster; `cbbackupmgr` allows backup, restore, and archiving each to be configured either for the local or for a remote cluster.
+When choosing whether to use the Backup Service or to directly call `cbbackupmgr`, consider these differences between these methods:
 
-* Whereas `cbbackupmgr` allows backups, merges, and other related operations only to be executed individually, the Backup Service provides automated, recurrent execution of such operations.
+* The Backup Service backs up, restores, and archives buckets only on the cluster it runs on. 
+You can use `cbbackupmgr` to back up, restore, and archive buckets on either the local cluster or a remote cluster.
 
-See xref:backup-restore:enterprise-backup-restore.adoc[cbbackupmgr], for more information.
+* The Backup Service lets you perform backup, restore, and archive tasks on a regular schedule.
+Calling `cbbackupmgr` runs a backup, restore, or archive task a single time.
+To use it on a regular schedule, you must rely on an external scheduling system such as `cron`.
 
-Note that both the Backup Service and `cbbackupmgr` allow _full_ and _incremental_ backups.
-Unlike the Backup Service, `cbbackupmgr` requires a new repository to be created for each new, full backup (successive `cbbackupmgr` backups to the same repository being incremental).
-Both allow incremental backups, once created, to be merged, and their data deduplicated.
-Both use the same backup archive structure; and allow the contents of backups to be listed, and specific documents to be searched for.
+See xref:backup-restore:enterprise-backup-restore.adoc[cbbackupmgr] for more information about using the command-line tool.
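
As a sketch of the command-line route, the commands below configure a repository, run a backup, and delegate recurrence to `cron`; the archive path, repository name, and credentials are illustrative, not prescriptive:

```shell
# Create a backup repository in a local archive (illustrative path and name).
/opt/couchbase/bin/cbbackupmgr config --archive /backups --repo nightly

# Run one backup of the local cluster into that repository;
# successive runs against the same repository are incremental.
/opt/couchbase/bin/cbbackupmgr backup --archive /backups --repo nightly \
    --cluster couchbase://127.0.0.1 --username Administrator --password password

# Unlike the Backup Service, cbbackupmgr has no scheduler of its own;
# an external scheduler such as cron provides recurrence, e.g. nightly at 02:30:
# 30 2 * * * /opt/couchbase/bin/cbbackupmgr backup --archive /backups --repo nightly \
#     --cluster couchbase://127.0.0.1 --username Administrator --password password
```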
 
 [#backup-service-architecture]
-== Backup-Service Architecture
+== Backup Service Architecture
 
-The Backup Service has a _leader-follower_ architecture.
-This means that one of the cluster's Backup-Service nodes is elected by ns_server to be the _leader_; and is thereby made responsible for dispatching backup tasks; for handling the addition and removal of nodes from the Backup Service; for cleaning up orphaned tasks; and for ensuring that global storage-locations are accessible by all Backup-Service nodes.
+When there are multiple Backup Service nodes in the cluster, 
+the Cluster Manager elects one of them to be the leader.
+The leader is responsible for:
 
-If the _leader_ becomes unresponsive, or is lost due to failover, the Backup Service ceases operation; until a rebalance has been performed.
-During the course of this rebalance, ns_server elects a new leader, and the Backup Service resumes, using the surviving Backup-Service nodes.
+* Dispatching backup tasks.
+* Adding and removing nodes from the Backup Service.
+* Cleaning up orphaned tasks.
+* Ensuring that all Backup Service nodes can reach the global storage locations.
+
+If the leader becomes unresponsive or fails over, the Backup Service stops until a rebalance takes place.
+During the rebalance, the Cluster Manager elects a new leader.
+The Backup Service then resumes running on the surviving Backup Service nodes.
 
 [#plans]
 == Plans
 
-The Backup Service is automated through the scheduling of _plans_, defined by the administrator.
+To automate backups using the Backup Service, you must create a plan that tells the service what to back up and when.
 A plan contains the following information:
 
-* The data of which services is to be backed up.
+* The data to back up.
+
+* Where to store the backup.
+You associate a plan with a repository, which stores the backed-up data (see the next section).
 
-* The storage location of the backup. This can be either `filesystem` or `cloud` storage. +
-Selecting `cloud` storage for the backup location will require additional parameters such as the name of the bucket for storing the backup, and the access credentials.
+* The schedule for the Backup Service to run backup tasks.
 
-* The _schedule_ on which backups (or backups and merges) will be performed.
+* The type of backup to perform. 
+Backups can be full or incremental.
+In addition to just backing up data, a backup task can also merge backups.  
 
-* The type of task to be performed: this can either be _one or more backups_, or _one or more backups and one or more merges_.
-Backups can be _full_ or _incremental_.
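
The plan contents listed above map onto the JSON body of the Backup Service REST API's plan-creation call. The sketch below is illustrative only: the port, endpoint, and field names are assumptions that should be checked against the Backup Service REST API reference, and the plan name and schedule are hypothetical:

```shell
# Hypothetical sketch: create a plan that backs up Data and Index Service
# data with a full backup every day at 22:00 (field names are assumptions).
curl -u Administrator:password -X POST \
  http://localhost:8097/api/v1/plan/NightlyPlan \
  -H 'Content-Type: application/json' \
  -d '{
    "services": ["data", "gsi"],
    "tasks": [{
      "name": "nightly-full",
      "task_type": "BACKUP",
      "full_backup": true,
      "schedule": {"job_type": "BACKUP", "frequency": 1, "period": "DAYS", "time": "22:00"}
    }]
  }'
```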
 
 [#repositories]
 == Repositories
 
-A _repository_ is a location that contains backed up data.
-The location must be accessible to all nodes in the cluster, and must be assigned a name that is unique across the cluster.
-A repository is defined with reference either to _a specific bucket_, or to _all buckets_ in the cluster_.
-Data from each specified bucket will be backed up in the specified repository.
+A repository is a location where the Backup Service can store backup data.
+You associate a repository with a plan.
+You must set several options to define the repository, including:
+
+* Whether the repository is for all buckets, or a specific bucket.
+
+* Whether the repository is in `filesystem` or `cloud` storage. 
 
-A repository is defined with reference to a specific _plan_.
-Once repository-definition is completed, backups (or backups and merges) are performed of the data in the specified bucket (or buckets), with the data being saved in the repository on the schedule specified in the plan.
+* The repository's location: a path for filesystem repositories, or the cloud provider details plus a local staging directory for cloud repositories.
+All nodes in the cluster must be able to access the repository location. 
+
+Once you define the repository, the Backup Service performs backups and optionally merges of the data in the bucket or buckets on the schedule in the plan.
+
+NOTE: The `cbbackupmgr` tool takes a lock on the repository to which it's backing up data. 
+This lock can cause Backup Service tasks to fail if they attempt to back up data to the repository. 
+If you see backup tasks failing due to lock issues, a common cause is that a `cbbackupmgr` task (either one started directly or one started by the Backup Service) is using the repository.
 
 [#inspecting-and-restoring]
 == Inspecting and Restoring
 
-The Backup Service allows inspection to be performed on the history of backups made to a specific repository.
-Plans can be created, reviewed and deleted.
-Individual documents can be searched for, in respositories.
+After the Backup Service has backed up data, you can inspect it in several ways.
+You can view the history of backups the Backup Service has performed in a repository.
+You can also search the repositories for individual documents that have been backed up.
 
-Data from individual or selected backups within a repository can be _restored_ to the cluster, to a specified bucket.
-Document keys and values can be _filtered_, to ensure that only a subset of the data is restored.
-Data may be restored to its original keyspace, or _mapped_ for restoration to a different keyspace.
+When restoring data from a backup, you can define filters to choose a subset of the data to restore. 
+You can restore data to its original keyspace or apply a mapping to restore it to a different keyspace.
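
The same inspection and restore operations can be sketched with the underlying `cbbackupmgr` tool; the archive path, repository name, key pattern, and bucket mapping below are illustrative:

```shell
# List the backups held in a repository.
cbbackupmgr info --archive /backups --repo nightly

# Search the repository for a specific backed-up document.
cbbackupmgr examine --archive /backups --repo nightly \
    --collection-string 'travel-sample' --key airline_10

# Restore a filtered subset of the data, mapping it to a different keyspace.
cbbackupmgr restore --archive /backups --repo nightly \
    --cluster couchbase://127.0.0.1 --username Administrator --password password \
    --filter-keys '^airline_.*' \
    --map-data 'travel-sample=travel-sample-restored'
```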
 
 [#archiving-and-importing]
 == Archiving and Importing
 
-If a repository no longer needs to be _active_ (that is, with ongoing backups and merges continuing to occur), it can be _archived_: this means that the repository is still accessible, but no longer receives data backups.
-
-An archived repository can be _deleted_, so that the Backup Service no longer keeps track of it.
-Optionally, the data itself can be retained, on the local filesystem.
+If you no longer need a repository to perform backups, you can archive it. 
+You can still read the backed-up data in an archived repository.
+However, the Backup Service cannot perform further backups to the repository. 
 
-A deleted repository whose data still exists can be _imported_ back into the cluster, if required.
-Once imported, the repository can be _read_ from, but no longer receives data backups.
+If you delete a repository but do not delete the data it contains, you can import the data back into the cluster.
+After importing the data, you can read the data, but as with archived repositories, the Backup Service cannot write backups to it.
 
 [#avoiding-task-overlap]
 == Avoiding Task Overlap
 
-Although the Backup Service allows automated tasks to be scheduled at intervals as small as one minute, administrators are recommended typically not to lower the interval below fifteen minutes; and always to ensure that the interval is large enough to allow each scheduled task ample time to complete before the next is due to commence; even in the event of unanticipated network latency.
+The Backup Service allows you to schedule automated tasks at intervals as small as one minute.
+However, you should be cautious about using intervals under fifteen minutes.
+You must make sure the interval is large enough to allow each task enough time to finish before the next task is scheduled to start.
+
+Several conditions can cause a backup task to take longer than expected.
+Having many backups in the same repository can make the process of populating the backup's staging directory slower.
+Spikes in network latency can also cause a backup to take longer than usual.
+
+The Backup Service runs only a single task at a time.
+If another instance of a task is scheduled to start while a previous instance is still running, the Backup Service refuses to start the new instance, and that run of the task fails.
+If a backup task is scheduled to start while a different task is already running, the Backup Service queues the new task until the existing task finishes.
+
+A backup task can also fail if the underlying `cbbackupmgr` process it calls to perform the backup fails. 
+When run directly or by a Backup Service task, the `cbbackupmgr` tool takes a lock on the repository into which it's backing up data.
+This lock prevents any other instance of the `cbbackupmgr` tool from storing data into the repository.
+If the instance of `cbbackupmgr` started by a Backup Service task exits due to a lock on its repository, the backup task fails.
+
+For example, suppose you have a repository whose plan defines two tasks named TaskA and TaskB:
+
+* If a new instance of TaskA is scheduled to start while a prior instance of TaskA is still running, the Backup Service does not start the new instance of TaskA.
+
+* If there's a single Backup Service node and TaskB is scheduled to start while an instance of TaskA is still running, the Backup Service places TaskB in a queue until TaskA ends.
+
+* If TaskB is scheduled to start while an instance of TaskA is still running on a cluster with multiple Backup Service nodes, TaskB fails.
+In this case, the Backup Service passes a new instance of TaskB to the Backup Service on a different node from the one that's running TaskA.
+This Backup Service node starts TaskB immediately.
+However, TaskA's instance of `cbbackupmgr` holds a lock on the repository.
+This lock prevents TaskB's `cbbackupmgr` process from getting a lock on the repository, causing it to fail.
 
-Each running task maintains a lock on its repository.
-Therefore, if, due to an interval-specification that is too small, one scheduled task attempts to start while another is still running, the new task cannot run.
+When a task fails to start, the next successful run of the task backs up the data that the failed run would have backed up.
 
-For example, given a repository whose plan defines two tasks, _TaskA_ and _TaskB_:
+== Choosing the Number of Backup Service Nodes
 
-* If a new instance of _TaskA_ is scheduled to start while a prior instance of _TaskA_ is still running, the new instance fails to start.
+As explained in the previous section, backup tasks can fail to start if tasks that are already running use the same repository. 
+You have several options to configure your cluster to avoid having backup tasks fail due to these conflicts.
 
-* If, on a cluster with a single Backup-Service node, a new instance of _TaskB_ is scheduled to start while an instance of _TaskA_ is still running, _TaskB_ is placed in a queue, and starts when _TaskA_ ends.
+The simplest option is to have a single Backup Service node.
+This configuration is useful if you have multiple backup tasks that target the same repository. 
+If one task is scheduled to start while another task is running, the Backup Service adds the scheduled task to a queue instead of causing it to fail.
+One drawback of this configuration is that it reduces resiliency. 
+If the single Backup Service node fails over, then there is no other Backup Service node available to handle backups.
 
-* If, on a cluster with multiple Backup-Service nodes, a new instance of _TaskB_ is scheduled to start while an instance of _TaskA_ is still running, _TaskB_ is passed to a different node from the one that is running _TaskA_, but then fails to start.
+If you want greater resiliency for your backups, you can add multiple Backup Service nodes to the cluster.
+However, this configuration increases the risk of backup tasks failing due to overlap when multiple tasks back up to the same repository.
 
-In cases where data cannot be backed up due to a task failing to start, the data will be backed up by the next successful running of the task.
+In either of these cases, you still need to schedule the tasks so that the same task does not overlap with itself.
 
 [#specifying-merge-offsets]
-== Specifying Merge Offsets
+== Setting Merge Offsets
 
-As described in xref:manage:manage-backup-and-restore/manage-backup-and-restore.adoc#schedule-merges[Schedule Merges], the Backup Service allows a schedule to be established for the automated merging of backups that have been previously accomplished.
-This involves specifying a _window of past time_.
-The backups that will be merged by the scheduled process are those that fall within the specified window.
+As explained in the xref:manage:manage-backup-and-restore/manage-backup-and-restore.adoc#schedule-merges[Schedule Merges] section, the Backup Service lets you set a schedule for automatically merging previous backups. 
+To schedule merges, you define a past time range within which the Backup Service automatically merges backups.
 
-The window's placement and duration are determined by the specifying of two offsets.
-Each offset is an integer that refers to a day.
-The *merge_offset_start* integer indicates the day that contains the _start_ of the window.
-The *merge_offset_end* integer indicates the day that contains the _end_ of the window.
-Note that these offsets are each measured from a different point:
+You set this time range by specifying two offsets, each representing a number of days. 
+The `merge_offset_start` integer indicates the beginning of the time range, and the `merge_offset_end` integer indicates its end.
 
-* The *merge_offset_start* integer is measured from the present day — the present day itself always being specified by the integer *0*.
+These are offsets from different points in time:
 
-* The *merge_offset_end* is measured from the specified *merge_offset_start*.
+* `merge_offset_start` is an offset from today, represented by the integer 0.
+For example, setting `merge_offset_start` to 90 means the start of the merge window is 90 days before today.
+* `merge_offset_end` sets the number of days before the day you selected with `merge_offset_start`.
+For example, suppose you set `merge_offset_start` to 90 and set `merge_offset_end` to 30.
+Then the end of the range is 120 days before today, because 90 + 30 = 120.
 
-This is indicated by the following diagram, which includes two examples of how windows may be established:
+The following diagram shows two examples of setting offsets:
 
 image::services-and-indexes/services/mergeDiagram.png[,780,align=left]
 
-The diagram represents eight days, which are numbered from right to left; with the present day specified by the integer *0*, yesterday by *1*, the day before yesterday by *2*, and so on.
-(Note that the choice of eight days for this diagram is arbitrary: the Backup Service places no limit on integer-size when establishing a window.)
+In this diagram, days are numbered from right to left, with today as 0, yesterday as 1, the day before yesterday as 2, and so on. 
+The choice of eight days in the diagram is arbitrary.
+The Backup Service does not limit the size of the integer when setting the time range.
 
-Two examples of window-definition are provided.
-The first, _Example A_, shows a value for *merge_offset_start* of *0* — the integer *0* indicating the present day.
-Additionally, it shows a value for *merge_offset_end* of *3*; indicating that 3 days should be counted back from the present day.
+The diagram contains two examples: 
 
-Thus, if the present day is June 30th, the start of the window is on June 30th, and the end of the window on June 27th.
-Note that the end of the window occurs at the _start_ of the last day: this means that the whole of the last day is included in the window.
-Note also that when *0* is specified, the window starts on the present day at whatever time the scheduled merge process is run: therefore, if the process runs at 12:00 pm on the present day, only the first half of the present day is included in the window.
-All days that occur between the start day and the end day are wholly included.
+* Example A sets `merge_offset_start` to 0 (today) and `merge_offset_end` to 3 (three days before today). 
+If today is June 30, the time range is from June 30 to June 27. 
+The end of the range includes the entire last day.
+When you use 0 to indicate today, the range starts from the time the scheduled merge process begins running: for example, if the process runs at 12:00pm, only the first half of today is included in the range.
 
-_Example B_ shows a value for *merge_offset_start* of *4*; which indicates 4 days before the present day.
-Additionally, it shows a value for *merge_offset_end* of *3*; indicating that 3 days should be counted back from the specified *merge_offset_start*.
-Thus, if the present day is March 15th, the start of the window is on March 11th, and the end of the window on March 8th.
-Note that when the start-day is _not_ the present day, the window starts at the end of that day: therefore, the whole of the start-day, the whole of the end-day, and the whole of each day in between are all included in the window.
+* Example B sets `merge_offset_start` to 4 (four days before today) and `merge_offset_end` to 3 (three days before the start day, which is seven days before today).
+Therefore, if today is March 15, the time range is from March 11 to March 8, with both the start and end days included entirely.
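
To sanity-check this arithmetic, Example B can be reproduced with GNU `date`; the date `2021-03-15` below is an arbitrary stand-in for "today":

```shell
# Example B: merge_offset_start=4, merge_offset_end=3 (requires GNU date).
today=2021-03-15
start=$(date -d "$today 4 days ago" +%F)   # window start: 4 days before today
end=$(date -d "$start 3 days ago" +%F)     # window end: 3 days before the start day
echo "merge window: $start .. $end"        # merge window: 2021-03-11 .. 2021-03-08
```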
 
 [#see-also]
 == See Also
 
-For information on using the Backup Service by means of Couchbase Web Console, see xref:manage:manage-backup-and-restore/manage-backup-and-restore.adoc[Manage Backup and Restore].
-For reference pages on the Backup Service REST API, see xref:rest-api:backup-rest-api.adoc[Backup Service API].
-For information on the port numbers used by the Backup Service, see xref:install:install-ports.adoc[Couchbase Server Ports].
-For a list of audit events used by the Backup Service, see xref:audit-event-reference:audit-event-reference.adoc[Audit Event Reference].
+* See xref:manage:manage-backup-and-restore/manage-backup-and-restore.adoc[Manage Backup and Restore] to learn how to configure the Backup Service with the Couchbase Web Console.
+* See xref:rest-api:backup-rest-api.adoc[Backup Service API] for information about using the Backup Service from the REST API.
+* To learn about the port numbers the Backup Service uses, see xref:install:install-ports.adoc[Couchbase Server Ports].
+* For a list of Backup Service audit events, see xref:audit-event-reference:audit-event-reference.adoc[Audit Event Reference].
diff --git a/modules/manage/pages/import-documents/import-documents.adoc b/modules/manage/pages/import-documents/import-documents.adoc
index a00a231966..904bcf74c5 100644
--- a/modules/manage/pages/import-documents/import-documents.adoc
+++ b/modules/manage/pages/import-documents/import-documents.adoc
@@ -1,9 +1,14 @@
-= Import Documents
+= Import Documents with the Couchbase Web Console
+:imagesdir: ../../assets/images
+:page-pagination:
 :description: Couchbase Web Console provides a graphical interface for the importing of data, in both JSON and other formats.
+:escape-hatch: cloud:clusters:data-service/import-data-documents.adoc
 
 [abstract]
 {description}
 
+include::ROOT:partial$component-signpost.adoc[]
+
 [#importing-data]
 == Options for Importing Data
 
@@ -17,7 +22,7 @@ Data can be imported into Couchbase Server by means of the following:
 
 The *cbimport json* and *cbimport csv* command-line utilities should be used in preference to Couchbase Web Console whenever high-performance importing is required; and especially when the data-set to be imported is greater in size than 100 MB.
 
-For information on the *cbimport* command-line utilities, access the *cbimport* entry, in the *CLI Reference*, in the vertical navigation bar, to the left.
+For information on the *cbimport* command-line utilities, see xref:tools:cbimport.adoc[].
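
For instance, a minimal `cbimport json` invocation for a list-format file might look like the following sketch; the cluster address, credentials, bucket name, and file path are placeholder values:

```shell
# High-performance import of a JSON list file (all values are placeholders).
cbimport json --cluster couchbase://127.0.0.1 \
  --username Administrator --password password \
  --bucket testBucket \
  --dataset file:///tmp/list.json \
  --format list \
  --generate-key %name%    # use each document's "name" field as its key
```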
 The remainder of this page explains how to import data by means of Couchbase Web Console.
 Note the following prerequisites:
 
@@ -29,33 +34,33 @@ The procedures below assume that a bucket named `testBucket` has been created.
 * Before attempting to import data with Couchbase Web Console, ensure that the *Query Service* has been deployed on the cluster: data-import with Couchbase Web Console depends on this service.
 
 [#access-the-import-documents-panel]
-== Accessing the Import Documents Panel
+== Access the Import Documents Panel
 
 Access the *Import Documents* panel of Couchbase Web Console, as follows:
 
 . Left-click on the *Documents* tab, in the left-hand navigation bar:
 +
-image::import-documents/accessDocumentsTab.png[,120,align=left]
+image::import-documents/accessDocumentsTab.png["The Documents tab in the left-hand navigation bar",120]
 
 . When the *Documents* screen appears, select the *Import* tab, on the horizontal navigation bar, near the top:
 +
-image::import-documents/accessImportDocumentsTab.png[,280,align=left]
+image::import-documents/accessImportDocumentsTab.png["The Import tab",280]
 
 The *Import* panel is now displayed:
 
-image::import-documents/importDocumentsPanel.png[,720,align=left]
+image::import-documents/importDocumentsPanel.png["The Import panel",720]
 
 [#understanding-the-import-panel]
-== Understanding the Import Panel
+== Understand the Import Panel
 
 The *Import* panel displays the following interactive graphical elements:
 
 * *Select File to Import*.
 A button that, when left-clicked on, displays a file-selection interface.
 This allows the user to select a single file that contains the data to be imported.
-To the right of the button is a link — *file format details* — that, when hovered over with the mouse-cursor, provides a pop-up notification of acceptable file-formats:
+To the right of the button is a link -- *file format details* -- that, when hovered over with the mouse-cursor, provides a pop-up notification of acceptable file-formats:
 +
-image::import-documents/fileFormatDetails.png[,440,align=left]
+image::import-documents/fileFormatDetails.png["The file format details popup",440]
 +
 These file-formats are described in the subsections below.
 
@@ -69,7 +74,7 @@ However, when the user left-clicks on the *Select File to Import* button, Couchb
 Should automatic file-type recognition ever result in the display of an incorrect file-type, the control at the right-hand side of the field can be used, to display a pulldown menu; which allows user-selection of the correct file-type.
 The menu appears as follows:
 +
-image::import-documents/parseFileAsMenu.png[,180,align=left]
+image::import-documents/parseFileAsMenu.png["The Parse File As menu",180]
 +
 The options xref:manage:import-documents/import-documents.adoc#importing-csv-and-tsv-files[CSV], xref:manage:import-documents/import-documents.adoc#importing-csv-and-tsv-files[TSV], xref:manage:import-documents/import-documents.adoc#import-a-json-list[JSON List], and xref:manage:import-documents/import-documents.adoc#importing-json-lines[JSON Lines], are described in the subsections below.
 
@@ -79,7 +84,7 @@ Three pulldown menus, which respectively display all buckets available on the cl
 The selected bucket, scope, and collection are those into which data will be imported.
 For example:
 +
-image::import-documents/destinationBucketSelectTestBucket.png[,320,align=left]
+image::import-documents/destinationBucketSelectTestBucket.png["The Keyspace controls with the bucket menu displayed",320]
 
 * *Import With Document ID*.
 Two radio-buttons, which allow specification of how the _id_ of the newly imported document is to be determined.
@@ -107,7 +112,7 @@ Status on the operation is displayed immediately below the button.
 Note that if the operation takes a long time, the button's label is changed to *Cancel*; at which point, by left-clicking, the user can cancel the import operation.
 
 [#import-a-json-list]
-== Importing a JSON List
+== Import a JSON List
 
 To be imported, JSON documents must be specified in a file: the file itself must then be specified as the target for import.
 Within the file, the documents can be specified in either of two ways: as a _list_, or as a series of _lines_.
@@ -132,32 +137,32 @@ Each element is a document, containing multiple key-value pairs.
 
 . Within the *Import* panel, left-click on the *Select File to Import* button:
 +
-image::import-documents/selectFileToImport.png[,340,align=left]
+image::import-documents/selectFileToImport.png["Select File to Import button",340]
 +
 This brings up the file-selection interface specific to the host operating system.
 Use this to select the file targeted for import.
 For example:
 +
-image::import-documents/fileSelectionInterface.png[,200,align=left]
+image::import-documents/fileSelectionInterface.png["Selecting list.json",200]
 +
 When the file `list.json` has been selected, the *Import Documents* panel appears as follows:
 +
-image::import-documents/importDocumentsWithInitialContent.png[,720,align=left]
+image::import-documents/importDocumentsWithInitialContent.png["Preview of list.json",720]
 +
 The filename `list.json` now appears immediately below the *Select File to Import* button.
 The *Parse File As* menu displays *JSON List*, indicating that Couchbase Server has correctly recognized the file type.
 To the right of the *Parse File As* field, the number of records found in the file is displayed.
 +
-Note that, under *Import With Document ID*, the *Value of Field* option has now become activated; and displays, as a default selection, a common _field_ it has encountered — which is `name`.
+Note that, under *Import With Document ID*, the *Value of Field* option has now become activated; and displays, as a default selection, a common _field_ it has encountered -- which is `name`.
 +
 Note also that the *cbimport* command-line display has changed, to incorporate the information so far entered by means of the user-interface.
 +
-The *File Contents* field now shows the file contents — by default, as a *Parsed Table*.
+The *File Contents* field now shows the file contents -- by default, as a *Parsed Table*.
 
 . Specify a destination bucket, using the *Destination Bucket* pulldown menu.
 In this case, `testBucket` is to be selected, with the `_default` scope and collection:
 +
-image::import-documents/destinationBucketSelectTestBucket.png[,320,align=left]
+image::import-documents/destinationBucketSelectTestBucket.png["The Keyspace controls, selecting testBucket",320]
 
 . Select a form of _id_ for the documents to be imported.
 The *Import With Document ID* field provides two radio buttons.
@@ -169,45 +174,45 @@ For this instance, leave the default selection, *UUID*, unchanged.
 Optionally, the *File Contents* can now be displayed in the available, alternative forms.
 To display `list.json` as unformatted JSON, left-click on the *Raw File* tab:
 +
-image::import-documents/rawFileTab.png[,190,align=left]
+image::import-documents/rawFileTab.png["Raw File tab",190]
 +
 The file `list.json` now appears, unformatted, in the *File Contents* panel:
 +
-image::import-documents/fileContentsRawFile.png[,600,align=left]
+image::import-documents/fileContentsRawFile.png["list.json as a raw file",600]
 +
 Alternatively, left-click on the *Parsed JSON* tab:
 +
-image::import-documents/parsedJSONTab.png[,190,align=left]
+image::import-documents/parsedJSONTab.png["Parsed JSON tab",190]
 +
 The *File Contents* pane now shows a parsed version of the file `list.json`, the initial section of which appears as follows:
 +
-image::import-documents/fileContentsAsParsedJSON.png[,600,align=left]
+image::import-documents/fileContentsAsParsedJSON.png["list.json as parsed JSON",600]
 
 . Import the file.
 Left-click on the *Import Data* button, located in the lower center area of the *Import Documents* panel.
 +
-image::import-documents/leftClickOnImportButton.png[,190,align=left]
+image::import-documents/leftClickOnImportButton.png["The Import Data button",190]
 +
 The documents in the specified file are now imported.
 If the operation is successful, a notification appears at the lower left of the console:
 +
-image::import-documents/importNotification.png[,260,align=left]
+image::import-documents/importNotification.png["Import notification successful",260]
 
 . Check the imported documents.
 Left-click on the *Workbench* tab, on the horizontal, upper navigation bar:
 +
-image::import-documents/leftClickOnWorkbenchTab.png[,250,align=left]
+image::import-documents/leftClickOnWorkbenchTab.png["The Workbench tab",250]
 +
 This brings up the *Edit* panel, which now appears as follows:
 +
-image::import-documents/documentEditorWithImportedDocuments.png[,720,align=left]
+image::import-documents/documentEditorWithImportedDocuments.png["The imported data in the Documents Workbench",720]
 +
 The five documents contained in the file `list.json` have been successfully imported.
 Each has been automatically assigned an id.
 The documents can now be inspected and edited, by means of the *Workbench*.
 
 [#importing-into-scopes-and-collections]
-=== Importing into Scopes and Collections
+=== Import into Scopes and Collections
 
 A _collection_ is a data container, defined on Couchbase Server, within a bucket whose type is either _Couchbase_ or _Ephemeral_.
 A _scope_ is a mechanism for the grouping of multiple collections.
@@ -221,20 +226,20 @@ In the previous example, the bucket *testBucket* was selected, but no change was
 However, administrator-created scopes and collections can be specified by means of the pulldown menus provided.
 For example, if the bucket *testBucket* contains a scope named *testScope*, within which is a collection named *testCollection*, these can be specified as follows:
 
-image::import-documents/specifyScopeAndCollection.png[,320,align=left]
+image::import-documents/specifyScopeAndCollection.png["The Keyspace controls, selecting testCollection",320]
 
 At this point, the generated command, displayed at the upper right of the *Import* panel, is as follows:
 
-image::import-documents/generatedCommandWithScopeAndCollection.png[,720,align=left]
+image::import-documents/generatedCommandWithScopeAndCollection.png["Generated cbimport command",720]
 
 As this shows, `"testScope.testCollection"` appears as the value for the `--scope-collection-exp` flag.
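
Running the equivalent command directly from a shell might look like this sketch (cluster address, credentials, and file path are placeholders):

```shell
# Import list.json, routing every document into testScope.testCollection
# (all values are placeholders).
cbimport json --cluster couchbase://127.0.0.1 \
  --username Administrator --password password \
  --bucket testBucket \
  --dataset file:///tmp/list.json \
  --format list --generate-key %name% \
  --scope-collection-exp "testScope.testCollection"
```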
 When the *Import Data* button is now left-clicked on, the contents of the `list.json` file are imported into the collection *testCollection*, within the scope *testScope*, within the bucket *testBucket*.
 This can be validated by accessing the *Documents* panel, and specifying the appropriate keyspace:
 
-image::import-documents/documentsImportedIntoCollection.png[,720,align=left]
+image::import-documents/documentsImportedIntoCollection.png["The Document Workbench with imported data in the testCollection",720]
 
 [#importing-json-lines]
-== Importing JSON Lines
+== Import JSON Lines
 
 A _JSON Lines_ file is one that contains one or more JSON documents, each on a separate line.
 The following procedure demonstrates how to import such a file.
@@ -257,14 +262,14 @@ Each object contains two fields, which are `lastName` and `employeeNumber`.
 . Left-click on the *Select File to Import* button, and select the `lines.json` file.
 On selection, the *Parse File As* field displays *JSON Lines*, and the *File Contents* field displays the following:
 +
-image::import-documents/fileContentsWithJSONlinesParsedTable.png[,680,align=left]
+image::import-documents/fileContentsWithJSONlinesParsedTable.png["lines.json as a parsed table",680]
 
 . Select `testBucket` with _default_ scope and collection, as the destination *Keyspace*.
 
 . In the *Import With Document ID* panel, select the *Value of Field* option, and display the pulldown menu.
 This appears as follows:
 +
-image::import-documents/importWithEmployeeNumber.png[,440,align=left]
+image::import-documents/importWithEmployeeNumber.png["Value of Field, selecting employeeNumber",440]
 +
 Each `employeeNumber` field contains a unique value, and can therefore be used as the document id: select *employeeNumber* as the value to be used.
 
@@ -274,18 +279,18 @@ When the *Import Complete* dialog confirms success, dismiss the dialog by left-c
 . Examine the imported documents, by accessing the *Workbench* tab.
 The documents appear as follows:
 +
-image::import-documents/importedDocumentsWithEmployeeNumberID.png[,720,align=left]
+image::import-documents/importedDocumentsWithEmployeeNumberID.png["The Document Workbench with lines.json imported",720]
 
 Thus, each document has been imported, with its `employeeNumber` value as the id of the document.
 
 [#importing-csv-and-tsv-files]
-== Importing CSV and TSV Files
+== Import CSV and TSV Files
 
 To import a CSV (_comma-separated values_) file, proceed as follows:
 
 . Save the following, as `employees.csv`:
 +
-[source,json]
+[source,csv]
 ----
 lname,empno
 smith,0003456
@@ -298,7 +303,7 @@ davis,0009324
 Select `testBucket` with _default_ scope and collection, as the destination *Keyspace*.
 The panel now appears as follows:
 +
-image::import-documents/importDocumentsWithCSVprepared.png[,720,align=left]
+image::import-documents/importDocumentsWithCSVprepared.png["Preview of employees.csv",720]
 
 . Under *Import With Document ID*, specify `empno` as *Value of Field*.
 
 The documents are imported, with the value of `empno` used as the id for each.
 
 To import a TSV (_tab-separated values_) file, follow the same procedure, with a file named `employees.tsv`, containing the following:
 
+[source,tsv]
 ----
 lname     empno
 smith     0003456
@@ -318,28 +324,28 @@ davis	  0009324
 ----
 
 [#handling-errors]
-== Handling Errors
+== Handle Errors
 
 If the contents of a file selected for import are inconsistent, Couchbase Server displays an error notification.
 For example:
 
-* *JSON Parse Errors*.
+JSON Parse Errors::
 +
-image::import-documents/jsonParseErrors.png[,360,align=left]
+image::import-documents/jsonParseErrors.png["JSON Parse Errors dialog",360]
 +
 Displayed when the JSON within a file is incorrect.
 For example, the JSON of a particular document is flawed (possibly due to a missing or redundant comma, or a missing curly brace); or the JSON array within a _list_ file is missing a square bracket; or more than one document within a _lines_ file appears on the same line.
 
-* *Import Warning: No Records Found*
+Import Warning: No Records Found::
 +
-image::import-documents/importWarning.png[,360,align=left]
+image::import-documents/importWarning.png["Import Warning dialog: no records found",360]
 +
 Displayed when no records can be found within the specified file.
 This may be due to a file-naming error: for example, a JSON _list_ has been saved as a `*.lines` file.
 
-* *Import Warning: Data-Type Unrecognized*
+Import Warning: Data-Type Unrecognized::
 +
-image::import-documents/importWarning2.png[,360,align=left]
+image::import-documents/importWarning2.png["Import Warning dialog: data type unrecognized",360]
 +
 Displayed when Couchbase Server cannot identify the data within the file as being of any supported type.
 
diff --git a/modules/manage/pages/manage-documents/manage-documents.adoc b/modules/manage/pages/manage-documents/manage-documents.adoc
new file mode 100644
index 0000000000..9bc2bc7a22
--- /dev/null
+++ b/modules/manage/pages/manage-documents/manage-documents.adoc
@@ -0,0 +1,130 @@
+= Manage Documents in the Couchbase Web Console
+:imagesdir: ../../assets/images
+:page-topic-type: guide
+:page-pagination:
+:description: Couchbase Web Console provides a graphical interface that you can use to view and edit documents.
+:escape-hatch: cloud:clusters:data-service/manage-documents.adoc
+
+[abstract]
+{description}
+
+include::ROOT:partial$component-signpost.adoc[]
+
+== Access the Documents Workbench
+
+You can use the *Documents* workbench to view and manage documents in a database.
+
+image::manage-ui/documentsScreenWithDocuments.png["The Documents workbench with documents displayed",700]
+
+To display the *Documents* workbench, do one of the following:
+
+* In the Couchbase Web Console, choose menu:Documents[Workbench].
+
+* Choose menu:Buckets[], then click the *Documents* link for any bucket, scope, or collection.
+
+[#retrieve-documents]
+== Retrieve Documents
+
+You can use the *Documents* workbench to retrieve and view the individual documents contained within a bucket, scope, or collection on the database.
+The retrieved documents are summarized in a table format, with a row for each retrieved document.
+Controls at the top of the table determine which documents are retrieved and how they are displayed.
+
+The *Documents* workbench has the following controls:
+
+* *Keyspace*: A set of three related drop-down menus:
+
+** A drop-down menu that displays the name of the bucket whose documents are shown.
+You can use the drop-down menu to select from a list of buckets in the current database.
+
+** A drop-down menu that displays the name of the scope whose documents are shown.
+The `_default` scope is the default.
+You can use the drop-down menu to select from a list of all the scopes within the current bucket.
+
+** A drop-down menu that displays the name of the collection whose documents are shown.
+The `_default` collection is the default.
+You can use the drop-down menu to select from a list of all the collections within the current scope.
+
+* *Limit*: The maximum number of rows (documents) to retrieve and display at once.
+
+* *Offset*: The number of documents in the current bucket to skip before display begins.
+
+* *Document ID*: Accepts the ID value of a specific document.
+Leave this field blank to retrieve documents based on *Limit* and *Offset*.
+
+* *show range*: Enables you to enter starting ID and ending ID values to specify a range of ID values.
+
+* *N1QL WHERE*: Accepts a {sqlpp} query -- specifically, a WHERE clause which determines the subset of documents to show.
+
+// The control really is called N1QL WHERE. Do not update.
+
+To retrieve a set of documents:
+
+. Use the *Keyspace* drop-down menus to specify the location of the documents you want to view.
+The set of retrieved documents is automatically updated based on your selections.
+
+. (Optional) Use *Limit*, *Offset*, *Document ID*, or *N1QL WHERE* to specify a set of documents, then click btn:[Retrieve Docs] to retrieve the documents based on your configuration.
+
+. (Optional) To view successive sets of documents, use the *next batch >* and *< prev batch* controls.
+
+== View and Edit Existing Documents
+
+To view and edit a document:
+
+. <<retrieve-documents,Retrieve>> a set of documents.
+
+. Click on a document name, or click the pencil icon icon:pencil[] next to the document name.
+
+. (Optional) Click *Data* to view the document data.
+This comprises a series of key-value pairs in JSON format, which may be nested.
+You can make modifications to the document data.
++
+image::manage-ui/editDocumentData.png["The Edit Document dialog showing document data",300]
+
+. (Optional) Click *Metadata* to view the xref:learn:data/data.adoc#metadata[document metadata].
+It's not possible to edit a document's metadata.
+Couchbase Server generates metadata when the document is saved.
++
+image::manage-ui/editDocumentMetaData.png["The Edit Document dialog showing document metadata",300]
+
+. Click btn:[Save] to save your changes.
+
+== Edit Existing Documents in Spreadsheet View
+
+When spreadsheet view is enabled, you can edit an existing document directly in the results area.
+
+To edit existing documents in spreadsheet view:
+
+. <<retrieve-documents,Retrieve>> the document to display it in the results area.
+. Click *enable field editing* so that the switch is enabled.
+. Edit any of the existing fields in the document.
+. Click the save icon icon:save[] next to the document name.
+
+== Copy Existing Documents
+
+To create a copy of an existing document:
+
+. <<retrieve-documents,Retrieve>> the document to display it in the results area.
+. Click the copy icon icon:copy[] next to the document name.
+. Enter a document *ID* and edit the contents of the document.
+. Click btn:[Save] to create the document.
+
+== Create New Documents
+
+To create a new document:
+
+. Click btn:[Add Document].
+. Enter a document *ID* and edit the contents of the document.
+. Click btn:[Save] to create the document.
+
+== Delete Documents
+
+To delete a document:
+
+. <<retrieve-documents,Retrieve>> the document to display it in the results area.
+. Click the trash icon icon:trash[] next to the document name.
+. Click btn:[Continue] to delete the document.
+
+== Related Links
+
+To import a set of JSON documents or a CSV file, see xref:manage:import-documents/import-documents.adoc[].
diff --git a/modules/manage/pages/manage-expiration.adoc b/modules/manage/pages/manage-expiration.adoc
index 5b6d7a84fe..3e4a36cca9 100644
--- a/modules/manage/pages/manage-expiration.adoc
+++ b/modules/manage/pages/manage-expiration.adoc
@@ -99,7 +99,7 @@ If you then query the document's expiration metadata, you'll find that it's now
 ]
 ----
 
-You can prevent Couchbase Server from clearing the expiration value by using the xref:settings:query-settings.adoc#preserve_expiry[`preserve_expiry`] request-level parameter. 
+You can prevent Couchbase Server from clearing the expiration value by using the xref:n1ql:n1ql-manage/query-settings.adoc#preserve_expiry[preserve_expiry] request-level parameter. 
 
 You can directly preserve a document's expiration when mutating it. 
 To preserve the expiration, set the expiration metadata to the document's current expiration value.  
diff --git a/modules/manage/pages/manage-indexes/manage-indexes.adoc b/modules/manage/pages/manage-indexes/manage-indexes.adoc
index cd592e72db..5519f8bb70 100644
--- a/modules/manage/pages/manage-indexes/manage-indexes.adoc
+++ b/modules/manage/pages/manage-indexes/manage-indexes.adoc
@@ -1,6 +1,7 @@
 = Manage Indexes
 :description: Indexes provided by the Index Service can be managed with Couchbase Web Console, with the CLI, and with the REST API.
 :imagesdir: ../../assets/images
+:escape-hatch: cloud:clusters:index-service/manage-indexes.adoc
 
 // Cross references
 :storage-modes: xref:learn:services-and-indexes/indexes/storage-modes.adoc
@@ -20,6 +21,8 @@
 [abstract]
 {description}
 
+include::ROOT:partial$component-signpost.adoc[]
+
 [#defining-editing-and-managing-indexes]
 == Defining, Editing, and Managing Indexes
 
@@ -40,20 +43,20 @@ It also shows how to use the console's *Query Editor*, provided on the *Query* s
 The user interface for index management is provided on the *Indexes* screen.
 Access this by left-clicking on the tab in the left-hand navigation bar:
 
-image::manage-ui/indexesTab.png[,100,align=left]
+image::manage-ui/indexesTab.png["The Indexes tab in the left-hand navigation bar",100]
 
 An index list, showing a summary of all currently-defined indexes, is displayed in table format.
 
-image::manage-ui/indexesScreenFullyPrepared.png[,700,align=left]
+image::manage-ui/indexesScreenFullyPrepared.png["The Indexes screen",700]
 
 The *Bucket & Scope* fields allow selection of a bucket from those defined on the cluster; and of a scope from those defined within the bucket.
 Left-click on the left-hand field, to display a pull-down menu of available buckets:
 
-image::manage-indexes/buckets-pulldown-menu.png[,400,align=left]
+image::manage-indexes/buckets-pulldown-menu.png["The buckets menu",400]
 
 Likewise, left-click on the right-hand field, to display a pull-down menu of scopes within the selected bucket:
 
-image::manage-indexes/scopes-pulldown-menu.png[,400,align=left]
+image::manage-indexes/scopes-pulldown-menu.png["The scopes menu",400]
 
 Each time a selection is made, the list of indexes in the lower panel is redisplayed; so as to show the indexes that are defined on data within the selected scope and bucket.
 
@@ -70,7 +73,7 @@ The name of the index.
 There may also be one or more indicators after the index name, giving further information:
 
 +
-image::manage-indexes/index-indicators.png[]
+image::manage-indexes/index-indicators.png["Index indicators: partitioned, replica 1, stale"]
 
 ** `partitioned` indicates that the index is _partitioned_.
 An overview of partitioning is provided in xref:learn:services-and-indexes/indexes/index-replication.adoc#index-partitioning[Index Partitioning].
@@ -104,7 +107,7 @@ The state can be expressed as *ready*, *pause*, *warmup*, or *n mutations remain
 The color of the left margin of the index row also reflects the current state of the index.
 For example, the color is green when the index is *ready*; and orange when the index is in *warmup*.
 +
-image::manage-indexes/index-margins.png[]
+image::manage-indexes/index-margins.png["Index left margins: green and orange"]
 
 [[expand-index]]
 == Index Administration
@@ -113,7 +116,7 @@ To administer an index, left-click on a specific index row in the indexes list,
 (Subsequently, whenever appropriate, left-click on the row again, to collapse it.)
 When the row is expanded, it appears as follows:
 
-image::manage-indexes/index-row-expanded.png[,700,align=left]
+image::manage-indexes/index-row-expanded.png["Index details",700]
 
 The following information is thus provided:
 
@@ -136,7 +139,7 @@ These controls are described below.
 To see statistics for the index, left-click on the *Index Stats* control in the expanded index row.
 The panel expands vertically, and provides the following display of interactive charts:
 
-image::manage-indexes/index-stats-display.png[,700,align=left]
+image::manage-indexes/index-stats-display.png["Index statistics",700]
 
 For more information on these charts, see {index-stats}[Index Statistics].
 
@@ -150,104 +153,12 @@ Proceed as follows:
 . From the *Indexes* screen, left-click the *Open in Workbench* button, in the expanded index row.
 The index definition is displayed in the Query Workbench:
 +
-image::manage-indexes/indexInQueryWorkbench.png[,700,align=left]
+image::manage-indexes/indexInQueryWorkbench.png["The Query Workbench with an index definition",700]
 
 . Modify the {sqlpp} index-definition, as required.
 (Note that you cannot change the definition of the existing index, but you can create a new index with the modified definition.)
 
-Immediately beneath the *Query Editor*, four buttons are displayed.
-These can be used to test queries, and to determine how to design corresponding indexes; so as to maximize query-performance.
-The buttons are as follows.
-
-==== Execute
-
-When left-clicked on, this executes the query that has been typed into the *Query Editor*.
-For example, type the following query into the *Query Editor*: `SELECT icao FROM `travel-sample` WHERE name = "SeaPort Airlines";`.
-This selects every `icao` key-value pair from the bucket `travel-sample`, where the host document also contains a `name` value that is `SeaPort Airlines`:
-
-image::manage-ui/queryEditorWithSelectQuery.png[,540,align=left]
-
-Left-click on the *Execute* button.
-
-image::manage-ui/leftClickOnExecuteButton.png[,110,align=left]
-
-Couchbase Web Console now provides feedback on the ongoing execution of the query, to the right of the buttons.
-When query-execution has concluded, the results are duly displayed:
-
-image::manage-indexes/resultsOfqueryExecution.png[,520,align=left]
-
-Note also that the default appearance of the *Query* screen includes, at the upper right, a button labeled *query context*:
-
-image::manage-indexes/queryContextButton.png[,120,align=left]
-
-Left-click on the control at the right-hand side of the button, to reveal its pulldown menu.
-This menu contains an entry for each bucket defined on the cluster:
-
-image::manage-indexes/bucketsButton.png[,120,align=left]
-
-Once a bucket has been selected, a further button (with pulldown-menu control) appears to the right, allowing selection of a scope within the selected bucket:
-
-image::manage-indexes/scopesButton.png[,240,align=left]
-
-Once a scope — for example, `inventory` — has been selected, queries can be entered into the *Query Editor* panel without explicit specification of bucket or scope being required: the bucket and scope for the query will be inferred from the pulldown-menu selections that have been made.
-For example, the following expression performs a query on the documents in the `airline` collection; which itself resides within `inventory`, within `travel-sample`:
-
-image::manage-indexes/queryEditorWithShorterSelectQuery.png[,540,align=left]
-
-Note that buckets and scopes other than those currently selected by means of the pulldown menus can still be explicitly specified within the *Query Editor*, if required.
-
-==== Explain
-
-When left-clicked on, this provides an explanation of how query-execution proceeded:
-
-image::manage-ui/leftClickOnExplainButton.png[,110,align=left]
-
-The explanation is now displayed in the *Query Results* panel:
-
-image::manage-ui/queryExplanation.png[,720,align=left]
-
-This indicates the bucket and primary index scan that have been used in the query; as well as the filter applied, and the number of terms returned.
-
-==== Index Advisor
-
-When left-clicked on, this displays advice as to what index or indexes might be created, in order to improve the future performance of the query:
-
-image::manage-indexes/leftClickOnAdviseButton.png[,110,align=left]
-
-Advice is duly displayed in the *Query Results* panel:
-
-image::manage-indexes/queryAdviceDisplay2.png[,440,align=left]
-
-In this instance, the advice consists of two options; which are, respectively, the creation of a _covering_ index, and the creation of a regular index.
-To create a covering index, left-click on the *Create and Build Covering Index* button:
-
-The following notification is now displayed:
-
-image::manage-ui/indexCreateWarning.png[,380,align=left]
-
-Left-click on *Continue*.
-When index-creation is completed, the following success-message appears on the *Query* screen:
-
-image::manage-ui/createIndexSuccessMessage.png[,620,align=left]
-
-==== Run as TX
-
-The *Run as TX* button allows the specified query to be run transactionally, across multiple indexes.
-For information on transactions, see xref:learn:data/transactions.adoc[Transactions].
-
-Left-click on the *Run as TX* button, and the query is run as a transaction.
-When the transaction is complete, status is displayed as follows:
-
-image::manage-indexes/transactionSuccessDisplay.png[,620,align=left]
-
-=== Index-Definition Support in Community Edition
-
-Note that in Couchbase Server _Community_ Edition, index-definition support is provided in a slightly different way.
-The area immediately below the *Query Editor* appears as follows:
-
-image::manage-ui/ceIndexAdvisorLink.png[,320,align=left]
-
-The https://index-advisor.couchbase.com/indexadvisor/#1[External Query Advisor^] link takes the user to an external web-site, where the *Query Advisor* can be accessed and used.
+For more information on using the Query Workbench, see xref:tools:query-workbench.adoc[].
 
 [[drop-index]]
 === Drop the Index
@@ -258,7 +169,7 @@ To drop the index from the bucket:
 +
 A pop-up message appears, asking if you are sure you want to drop the index.
 +
-image::manage-indexes/drop-index.png[,322,align=left]
+image::manage-indexes/drop-index.png["Drop Index dialog",322]
 
 . Left-click on the *Drop Index* button, to drop the index.
 Alternatively, left-click on the *Cancel* button, to cancel.
@@ -270,13 +181,15 @@ Note that you can also drop an index by means of the {sqlpp} {drop-index}[DROP I
 
 Summary statistics for the Index Service are displayed in the footer of the Indexes screen.
 
-image::manage-indexes/service-stats.png[,720,align=left]
+image::manage-indexes/service-stats.png["Index service statistics",720]
 
 For details of the index summary statistics, refer to {service-stats}[Index Service Statistics].
 
 == Index Storage Mode and Other Settings
 
-You can change the storage mode that all indexes use using the COuchbase Server Web Console's Settings page. This page also has other advanced options for indexes. See xref:manage:manage-settings/general-settings.adoc#index-storage-mode[Index Storage Mode]. 
+You can change the storage mode that all indexes use on the Couchbase Server Web Console's Settings page.
+This page also has other advanced options for indexes.
+See xref:manage:manage-settings/general-settings.adoc#index-storage-mode[Index Storage Mode].
 
 
 [[cli]]
diff --git a/modules/manage/pages/manage-security/manage-console-access.adoc b/modules/manage/pages/manage-security/manage-console-access.adoc
index 0a81e52c3d..31a2f9dbd2 100644
--- a/modules/manage/pages/manage-security/manage-console-access.adoc
+++ b/modules/manage/pages/manage-security/manage-console-access.adoc
@@ -1,6 +1,6 @@
 = Manage Console Access
 :description: Administrators can connect securely with Couchbase Web Console.
-:page-edition: enterprise edition
+:page-edition: Enterprise Edition
 
 [abstract]
 {description}
diff --git a/modules/manage/pages/manage-settings/general-settings.adoc b/modules/manage/pages/manage-settings/general-settings.adoc
index a3f0e8c9c1..2dee1a22ba 100644
--- a/modules/manage/pages/manage-settings/general-settings.adoc
+++ b/modules/manage/pages/manage-settings/general-settings.adoc
@@ -1,8 +1,10 @@
 = General
 :description: pass:q[_General_ settings allow configuration of _cluster name_, _memory quotas_, _storage modes_, and _node availability_ for the cluster; and of _advanced settings_ for the Index and Query Services.]
-:page-aliases: settings:cluster-settings,settings:change-failover-settings,manage:manage-settings/cluster-settings,manage:manage-settings/change-failover-settings,manage:manage-settings/update-notification
+:page-aliases: settings:cluster-settings, settings:change-failover-settings, manage:manage-settings/cluster-settings, manage:manage-settings/change-failover-settings, manage:manage-settings/update-notification, n1ql:n1ql-language-reference/backfill, settings:backfill
+:keywords: backfill
 :imagesdir: ../../assets/images
 :page-toclevels: 3
+
 [abstract]
 {description}
 
@@ -187,22 +189,37 @@ However, for best performance it is recommended to benchmark with different sett
 [#query-settings]
 === Query Settings
 
-Left-clicking on the *Advanced Query Settings* tab displays interactive fields whereby the Query Service can be configured.
+Left-clicking on *Advanced Query Settings* displays interactive fields with which you can configure the Query Service.
 The top section of the panel appears as follows:
 
 image::manage-settings/query-settings-top.png["The top half of the Query Settings panel",548,align=center]
 
-Specify either *Unrestricted* or *Restricted*, to determine which URLs are permitted to be accessed by the `curl` function.
-If *Unrestricted* (the default) is specified, all URLs can be accessed.
-If *Restricted* is specified, the UI expands, to display configurable fields into which the URLs allowed and disallowed can be entered.
+Under *CURL() Function Access*, specify either *Unrestricted* or *Restricted*, to determine which URLs the CURL() function can access.
+
+* If you specify *Unrestricted* (the default), the CURL() function can access all URLs.
+
+* If you specify *Restricted*, the UI expands, to display configurable fields into which you can enter the allowed and disallowed URLs.
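To make the restricted mode concrete, here is an illustrative sketch (not Couchbase's access-control code; the function name and the simple prefix-matching rule are assumptions for the example) of how an allowed/disallowed URL list can be applied to a CURL() target:

```shell
# Illustrative sketch only -- not Couchbase's CURL() access-control code.
# Checks a URL against one disallowed prefix and one allowed prefix;
# in restricted mode, anything not explicitly allowed is denied.
is_url_allowed() {
  url=$1; allowed=$2; disallowed=$3
  case "$url" in
    ${disallowed}*) echo "denied"  ;;  # disallowed list takes precedence
    ${allowed}*)    echo "allowed" ;;
    *)              echo "denied"  ;;  # deny by default when restricted
  esac
}

is_url_allowed "https://company1.com/api" "https://company1.com" "https://company2.com"
# prints: allowed
```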
 
-The *Query Temp Disk Path* field allows specification of the path to which temporary files are written, based on query activities.
-The maximum size of the target can be specified, in megabytes.
+(((backfill)))
+When a query has an extremely large corresponding index scan, the indexer buffers the results into a temporary directory.
+Since this method may cause high I/O and works differently on Windows, you can configure backfill settings for the {sqlpp} engine and its embedded GSI client.
+
+* The *Query Temp Disk Path* field enables you to specify the path to which the indexer writes temporary backfill files, to store any transient data during query processing.
+The specified path must already exist.
+Only absolute paths are allowed.
+The default path is `var/lib/couchbase/tmp` within the Couchbase Server installation directory.
+
+* The *Quota* field enables you to specify the maximum size of temporary backfill files, in megabytes.
+Setting the size to `0` disables backfill.
+Setting the size to `-1` means the size is unlimited.
+The maximum size is limited only by the available disk space.
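As a concrete illustration of the quota semantics above (a sketch only; the function name is made up and this is not the indexer's code), the *Quota* value in megabytes can be interpreted as follows:

```shell
# Illustrative sketch only -- not Couchbase code. Interprets the backfill
# Quota setting (in megabytes) as the UI describes it:
#   0 disables backfill, -1 means unlimited, anything else is a cap.
quota_to_bytes() {
  case "$1" in
    0)  echo "disabled" ;;
    -1) echo "unlimited" ;;
    *)  echo "$(( $1 * 1024 * 1024 ))" ;;
  esac
}

quota_to_bytes 2048   # prints: 2147483648
```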
 
 Additional Query settings are provided in the lower section of the panel:
 
 image::manage-settings/query-settings-bottom.png["The bottom half of the Query Settings panel",548,align=center]
 
+// NOTE: The N1QL Feature Controller still contains the word N1QL in the UI
+
 * *Pipeline Batch*: The number of items that can be batched for fetches from the Data Service.
 
 * *Pipeline Cap*: The maximum number of items that can be buffered in a fetch.
@@ -221,8 +238,6 @@ image::manage-settings/query-settings-bottom.png["The bottom half of the Query S
 
 * *Max Parallelism*: The maximum number of index partitions for parallel aggregation-computing.
 
-// NOTE: This option is still called the N1QL Feature Controller
-
 * *N1QL Feature Controller*: Enables or disables features in the Query engine.
 +
 WARNING: Do not change the *N1QL Feature Controller* setting without guidance from technical support.
@@ -251,7 +266,7 @@ Note that KV range scans cannot currently be started on a replica vBucket.
 If a query uses sequential scan and a data node becomes unavailable, the query might return an error, even if read from replica is enabled for the request.
 --
 
-For additional details on all the Query settings in the lower section of the panel, refer to xref:settings:query-settings.adoc[Settings and Parameters].
+For additional details on all the Query settings in the lower section of the panel, refer to xref:n1ql:n1ql-manage/query-settings.adoc[].
 
 [#index-storage-mode]
 === Index Storage Mode
@@ -496,7 +511,7 @@ To set cluster-level query settings, for example the log level and the maximum p
 --max-parallelism 4
 ----
 
-For additional details on the cluster-level query settings, refer to xref:settings:query-settings.adoc[Settings and Parameters].
+For additional details on the cluster-level query settings, refer to xref:n1ql:n1ql-manage/query-settings.adoc[Settings and Parameters].
 
 [#rebalance-settings-via-cli]
 === Rebalance Settings via CLI
@@ -745,14 +760,14 @@ Also see the REST API reference page, xref:rest-api:rest-reader-writer-thread-co
 [#query-settings-via-rest]
 === Query Settings via REST
 
-To set the directory for temporary query data, and establish its size-limit, use the `/settings/querySettings` method.
+To set the directory for temporary backfill data, and establish its size-limit, use the `/settings/querySettings` method.
 
 [source,shell]
 ----
 include::rest-api:example$query-settings-post-settings.sh[tag=request]
 ----
 
-This specifies that the directory for temporary query data should be `/tmp`; and that the maximum size should be 2048 megabytes.
+This specifies that the directory for temporary backfill data should be `/tmp`; and that the maximum size should be 2048 megabytes.
 
 If successful, this call returns a JSON document featuring all the current query-related settings, including access-control:
 
@@ -771,7 +786,7 @@ include::rest-api:example$query-settings-post-access.sh[tag=request]
 ----
 
 A JSON document is specified as the payload for the method.
-The document's values indicate that `https://company1.com` is allowed, and `https://company2.com` is disallowed.
+The document's values indicate that `+https://company1.com+` is allowed, and `+https://company2.com+` is disallowed.
 
 If successful, the call returns a JSON document that confirms the modified settings:
 
@@ -832,6 +847,7 @@ For more information on getting and setting the rebalance retry status, see xref
 
 To inspect the current maximum number of concurrent vBucket moves permitted for every node, use the `GET /settings/rebalance` HTTP method and URI, with the `rebalanceMovesPerNode` parameter, as follows:
 
+[source,shell]
 ----
 curl -v -X GET http://10.143.201.101:8091/settings/rebalance \
 -u Administrator:password
@@ -839,6 +855,7 @@ curl -v -X GET http://10.143.201.101:8091/settings/rebalance \
 
 This returns an object, confirming the current setting as being `4` (which is the default value):
 
+[source,json]
 ----
 {"rebalanceMovesPerNode":4}
 ----
@@ -846,6 +863,7 @@ This returns an object, confirming the current setting as being `4` (which is th
 To _set_ a new value for the parameter use the `POST` method with the same URI, and with the `rebalanceMovesPerNode` parameter.
 Note that the minimum value is `1`, and the maximum `64`.
 
+[source,shell]
 ----
 curl -v -X POST http://10.143.201.101:8091/settings/rebalance \
 -u Administrator:password \
@@ -854,6 +872,7 @@ curl -v -X POST http://10.143.201.101:8091/settings/rebalance \
 
 If successful, the call returns an object confirming the new setting:
 
+[source,json]
 ----
 {"rebalanceMovesPerNode":10}
 ----
diff --git a/modules/manage/pages/manage-settings/install-sample-buckets.adoc b/modules/manage/pages/manage-settings/install-sample-buckets.adoc
index df5fc4cd3d..4282e4d798 100644
--- a/modules/manage/pages/manage-settings/install-sample-buckets.adoc
+++ b/modules/manage/pages/manage-settings/install-sample-buckets.adoc
@@ -1,5 +1,5 @@
 = Sample Buckets
-:description: Sample buckets contain scopes, collections, and documents that are ready to be experimented with.
+:description: You can install buckets containing example scopes, collections, and documents that you can experiment with.
 :page-aliases: settings:install-sample-buckets
 
 [abstract]
@@ -8,45 +8,20 @@
 [#configuring-sample-buckets]
 == Sample Buckets
 
-Sample buckets contain data for experimental use.
-Sample buckets are referred to in code and command-line examples throughout Couchbase-Server documentation.
-
-Full and Cluster administrators can install sample buckets with xref:manage:manage-settings/install-sample-buckets.adoc#install-sample-buckets-with-the-ui[Couchbase Web Console] and the xref:manage:manage-settings/install-sample-buckets.adoc#install-sample-buckets-with-the-rest-api[REST API].
+Full and Cluster administrators can install sample buckets using the xref:manage:manage-settings/install-sample-buckets.adoc#install-sample-buckets-with-the-ui[Couchbase Server Web Console] and the xref:manage:manage-settings/install-sample-buckets.adoc#install-sample-buckets-with-the-rest-api[REST API].
 
 [#scopes-collection-and-sample-buckets]
 === Scopes, Collections, and Sample Buckets
 
-Couchbase Server Version 7.0 introduces xref:learn:data/scopes-and-collections.adoc[Scopes and Collections], which allow data within a bucket to be organized according to type.
-Buckets created and used on previous versions of the server, after upgrade to 7.x, initially have all their data within the _default_ collection, which is itself within the _default_ scope.
-From this point, data can be selectively migrated from the default collection into other, administrator-defined collections.
-
-Each sample bucket provided with 7.x contains its data _either_:
-
-* Entirely within the default scope and collection.
-These buckets are `beer-sample` and `gamesim-sample`.
-
-* Within multiple scopes and collections that have been pre-defined to exist in addition to the default scope and collection; _and_ within the default scope and collection also.
-This is the configuration provided for the `travel-sample` bucket.
-In total, _seven_ scopes exist within this bucket:
-
-** `_default`.
-This contains the `_default` collection; within which all documents reside.
-The `_default` collection therefore itself contains all documents that existed in pre-7.0 versions of the `travel-sample` bucket.
-
-** `inventory`.
-This also contains all documents that existed in pre-7.0 versions of the `travel-sample` buckets, but in a different configuration: here, the documents are distributed, according to type, across five collections; which are named `airline`, `airport`, `landmark`, `hotel`, and `route`.
+xref:learn:data/scopes-and-collections.adoc[Scopes and Collections] let you organize data within a bucket by type.
+The `beer-sample` and `gamesim-sample` sample buckets store all of their data in the default scope.
+The `travel-sample` bucket contains data in six scopes in addition to the `_default` scope.
+These additional scopes define several collections.
+The `inventory` scope has collections that organize travel data such as airlines and airports.
+The `tenant_agent_00` through `tenant_agent_04` scopes let you experiment with multi-tenancy applications.
 
-** `tenant_agent_00` to `tenant_agent_04`.
-Each of these five scopes contains two collections; which are named `users` and `bookings`.
-
-Since all three sample buckets contain, in their default collection, all data they held in pre-7.0 versions of Couchbase Server, programs written to access this data in its original locations will be able to continue doing so with minimal adjustment.
-All three buckets can also be used for experiments with _migration_, whereby the data is selectively redistributed into administrator-created collections.
-See xref:manage:manage-xdcr/replicate-using-scopes-and-collections.adoc#migrate-data-to-a-collection-with-the-ui[Migrate Data to a Collection with the UI].
-
-The `travel-sample` bucket contains travel-related data already in migrated form, within the collections in the scope `inventory`.
-The bucket can thus be used for immediate experimentation with application-access to scopes and collections.
-
-The `travel-sample` bucket also contains data within the `tenant_agent` scopes, which is appropriate for experimentation with _multi-tenancy-based_ application access.
+NOTE: The `_default` scope of the `travel-sample` bucket duplicates all of the data stored in the `inventory` and `tenant_agent_00` through `tenant_agent_04` scopes.
+This duplication makes the bucket compatible with scripts and applications written for versions of Couchbase Server earlier than 7.0 that did not support scopes and collections.
 
 [#install-sample-buckets-with-the-ui]
 == Install Sample Buckets with the UI
@@ -56,9 +31,9 @@ The *Sample Buckets* screen now appears, as follows:
 
 image::manage-settings/settings-samples.png[,720,align=left]
 
-Note that if one or more sample buckets have already been loaded, they are listed under the *Installed Samples* section of the page.
+If one or more sample buckets are already loaded, they're listed under the *Installed Samples* section of the page.
 
-For information on assigning roles to users, so as to enable them to access sample buckets following installation, see xref:manage:manage-security/manage-users-and-roles.adoc[Manage Users and Roles].
+See xref:manage:manage-security/manage-users-and-roles.adoc[Manage Users and Roles] to learn how to assign roles to users to grant access to the sample buckets.
 
 To install, select one or more sample buckets from the displayed list, using the checkboxes provided.
 For example, select the `travel-sample` bucket:
@@ -69,8 +44,8 @@ If there is insufficient memory available for the specified installation, a noti
 
 image::manage-settings/insufficientRamWarning.png[,290,align=left]
 
-For information on configuring memory quotas, see the information on xref:manage:manage-settings/general-settings.adoc[General] settings.
-For information on managing (including deleting) buckets, see xref:manage:manage-buckets/bucket-management-overview.adoc[Manage Buckets].
+For information about configuring memory quotas, see xref:manage:manage-settings/general-settings.adoc[General] settings.
+For information about managing (including deleting) buckets, see xref:manage:manage-buckets/bucket-management-overview.adoc[Manage Buckets].
 
 If and when you have sufficient memory, click [.ui]*Load Sample Data*.
 
@@ -83,15 +58,37 @@ See xref:manage:manage-buckets/bucket-management-overview.adoc[Manage Buckets],
 [#install-sample-buckets-with-the-rest-api]
 == Install Sample Buckets with the REST API
 
-To install sample buckets with the REST API, use the `POST /sampleBuckets/install` HTTP method and URI, as follows:
+To install sample buckets with the REST API, use the `POST /sampleBuckets/install` HTTP method and URI.
+For example:
+
+[source,console]
+----
+include::rest-api:example$install-sample-bucket.sh[]
+----
+
+If successful, the call returns a JSON dictionary that lists the tasks Couchbase Server started to load the buckets:
+
+[source,json]
+----
+include::rest-api:example$sample-bucket-install-response.json[] 
+----
+
+You can monitor the status of these tasks using the `/pools/default/tasks` REST API endpoint. 
+Pass it the `taskId` value from the task list returned by the call to `sampleBuckets/install`:
+
+[source,console]
+----
+curl -s -u Administrator:password  -G http://localhost:8091/pools/default/tasks \
+     -d taskId=439b29de-0018-46ba-83c3-d3f58be68b12 | jq '.' 
+----
+
+The command returns the current status of the task:
 
+[source,json]
 ----
-curl -X POST -u Administrator:password \
-http://10.143.194.101:8091/sampleBuckets/install \
--d '["travel-sample", "beer-sample"]'
+include::rest-api:example$beer-sample-task-status.json[] 
 ----
 
-If successful, the call returns an empty list.
+For more information about using the REST API, including details of how to retrieve a list of available sample buckets, see xref:rest-api:rest-sample-buckets.adoc[].
+For information about deleting buckets (including sample buckets), see xref:rest-api:rest-bucket-delete.adoc[].
 
-For further information on using the REST API, including details of how to retrieve a list of currently available sample buckets, see xref:rest-api:rest-sample-buckets.adoc[Managing Sample Buckets].
-For information on _deleting_ buckets (including sample buckets), see xref:rest-api:rest-bucket-delete.adoc[Deleting Buckets].
diff --git a/modules/manage/pages/manage-ui/manage-ui.adoc b/modules/manage/pages/manage-ui/manage-ui.adoc
index 6fbb577b92..d0aa709d81 100644
--- a/modules/manage/pages/manage-ui/manage-ui.adoc
+++ b/modules/manage/pages/manage-ui/manage-ui.adoc
@@ -1,4 +1,5 @@
 = Couchbase Web Console
+:imagesdir: ../../assets/images
 :description: The features of Couchbase Server can be managed by means of Couchbase Web Console.
 // :page-aliases: c-sdk:ROOT:webui-cli-access.adoc,dotnet-sdk:ROOT:webui-cli-access.adoc,go-sdk:ROOT:webui-cli-access.adoc,java-sdk:ROOT:webui-cli-access.adoc,nodejs-sdk:ROOT:webui-cli-access.adoc,php-sdk:ROOT:webui-cli-access.adoc,python-sdk:ROOT:webui-cli-access.adoc,
 
@@ -420,10 +421,10 @@ This brings up the *Documents* screen:
 image::manage-ui/documentsScreen.png[,700,align=left]
 
 This screen displays the documents contained within installed buckets.
-The screen is currently blank, since no buckets have yet been installed.
-The *Location* control permits a bucket to be selected from those installed, and for a scope and a collection within the bucket to be selected.
+Initially, the screen is blank, since no buckets have yet been installed.
+The *Keyspace* control permits a bucket to be selected from those installed, and for a scope and a collection within the bucket to be selected.
 Other controls allow specific documents to be displayed, according to configured parameters.
-(For information on scopes and collections, see xref:learn:data/scopes-and-collections.adoc[Scopes and Collections]).
+(For information on scopes and collections, see xref:learn:data/scopes-and-collections.adoc[Scopes and Collections].)
 
 The easiest way to install a bucket containing data is described in xref:manage:manage-settings/install-sample-buckets.adoc[Install Sample Buckets].
 If the `travel-sample` is installed, the screen appears as follows:
@@ -434,22 +435,10 @@ image::manage-ui/documentsScreenWithDocuments.png[,700,align=left]
 The internal content of documents can now be displayed and edited.
 
 The *Documents* screen presents two separate panels, which are accessible from the horizontal navigation bar along the top.
-The *Workbench* panel is the default, currently displayed.
-A full description of this panel and its contents is provided in xref:getting-started:look-at-the-results.adoc[Explore the Server Configuration], which is part of the the _Getting Started_ sequence.
+The *Workbench* panel is displayed by default.
+A full description of this panel and its contents is provided in xref:manage:manage-documents/manage-documents.adoc[], which is part of the Developer documentation.
 For an explanation of the *Import* panel, see xref:manage:import-documents/import-documents.adoc[Import Documents].
 
-To edit a document, left-click on a document-id that appears in the *id* column of the *Workbench* panel.
-This brings up the *Edit Document* dialog, which features an interactive *Data* panel, whereby the document's contents can be edited:
-
-image::manage-ui/editDocumentData.png[,300,align=left]
-
-To examine the document's _metadata_, left-click on the *Metadata* button, at the upper right of the *Edit Document* dialog.
-This duly brings up the *Metadata* panel (which is _read only_).
-
-image::manage-ui/editDocumentMetaData.png[,300,align=left]
-
-For instructions on installing a _sample bucket_, which contains documents that are ready to be inspected and experimented with, see xref:manage:manage-settings/install-sample-buckets.adoc[Install Sample Buckets].
-
 [#learning-about-documents]
 === Learn about Documents
 
@@ -531,7 +520,7 @@ This brings up the *Full Text Search* screen:
 image::manage-ui/searchScreen.png[,700,align=left]
 
 The screen contains panels for Search _Indexes_ and _Aliases_.
-Both panels are currently blank, since nothing has yet been created.
+Initially, both panels are blank, since nothing has yet been created.
 
 Creation of both is explained in xref:fts:fts-searching-from-the-ui.adoc[Searching from the UI].
 
@@ -554,7 +543,7 @@ This brings up the *Analytics* screen:
 image::manage-ui/analyticsScreen.png[,700,align=left]
 
 The screen contains an *Analytics Query Editor*, and a panel for *Analytics Query Results*.
-Both panels are currently blank.
+Initially, both panels are blank.
 
 [#analytics-learn-and-manage]
 === Analytics: Learn and Manage
@@ -574,7 +563,7 @@ This brings up the *Eventing* screen:
 [#console-eventing-screen]
 image::manage-ui/eventingScreen.png[,700,align=left]
 
-The screen is currently blank, since no Eventing functions have yet been defined.
+Initially, the screen is blank, since no Eventing functions have yet been defined.
 
 [#eventing-learn-and-manage]
 === Eventing: Learn and Manage
@@ -592,7 +581,7 @@ This brings up the *Repositories* screen, of the Backup Service:
 
 image::manage-ui/backupScreen.png[,700,align=left]
 
-The screen is currently blank, since no Backup-Service repositories have yet been defined.
+Initially, the screen is blank, since no Backup-Service repositories have yet been defined.
 
 [#backup-learn-and-manage]
 === Backup: Learn and Manage
@@ -614,7 +603,7 @@ This brings up the *Views* screen:
 [#console-views-screen]
 image::manage-ui/viewsScreen.png[,700,align=left]
 
-The screen is currently blank, since no Views have yet been defined.
+Initially, the screen is blank, since no Views have yet been defined.
 
 [#views-define-and-manage]
 === Views: Define and Manage
diff --git a/modules/manage/pages/monitor/monitor-intro.adoc b/modules/manage/pages/monitor/monitor-intro.adoc
index 39b0f117b9..c84823ddcf 100644
--- a/modules/manage/pages/monitor/monitor-intro.adoc
+++ b/modules/manage/pages/monitor/monitor-intro.adoc
@@ -47,7 +47,7 @@ For a complete list of metrics, see the xref:metrics-reference:metrics-reference
 
 Statistics for the Index Service can be managed by means of Couchbase Web Console: this is described in xref:manage:monitor/monitoring-indexes.adoc[Monitor Indexes].
 
-The monitoring of statistics related to the Query Service is described in xref:manage:monitor/monitoring-n1ql-query.adoc[Monitor Queries].
+The monitoring of statistics related to the Query Service is described in xref:n1ql:n1ql-manage/monitoring-n1ql-query.adoc[].
 
 The progressive desynchronization of nodes whose clock have been previously synchronized can be monitored; as described in xref:manage:monitor/xdcr-monitor-timestamp-conflict-resolution.adoc[Monitor Clock Drift].
 
diff --git a/modules/manage/pages/monitor/monitoring-indexes.adoc b/modules/manage/pages/monitor/monitoring-indexes.adoc
index 47343376cd..a44697f562 100644
--- a/modules/manage/pages/monitor/monitoring-indexes.adoc
+++ b/modules/manage/pages/monitor/monitoring-indexes.adoc
@@ -1,6 +1,7 @@
 = Monitor Indexes
 :description: The Indexes screen in Couchbase Web Console enables you to see statistics for a specific primary index or global secondary index.
 :imagesdir: ../../assets/images
+:no-escape-hatch:
 
 // Cross references
 :manage-indexes: xref:manage:manage-indexes/manage-indexes.adoc
@@ -12,6 +13,8 @@
 {description}
 It also enables you to see resource usage for the Index Service across all nodes.
 
+include::ROOT:partial$component-signpost.adoc[]
+
 [[index-stats]]
 == Index Statistics
 
@@ -25,7 +28,7 @@ To display statistics for a specific index:
 
 A graphical display of statistics for the index is shown.
 
-image::manage-indexes/index-stats-display.png[,700,align=left]
+image::manage-indexes/index-stats-display.png["Index statistics",700]
 
 The displayed charts are as follows:
 
@@ -81,9 +84,9 @@ The number of items waiting to be written.
 
 To change the interval over which the statistics are displayed, open the drop-down list to the right of the *Index Stats* heading; and select an interval:
 
-image::manage-indexes/index-stats-interval.png[,180,align=left]
+image::manage-indexes/index-stats-interval.png["The Index Stats menu",180]
 
-The available intervals are *Minute*, *Hour*, *Day*, *Week*, and *Month*..
+The available intervals are *Minute*, *Hour*, *Day*, *Week*, and *Month*.
 
 [[service-stats]]
 == Index-Service Statistics
@@ -91,7 +94,7 @@ The available intervals are *Minute*, *Hour*, *Day*, *Week*, and *Month*..
 The footer of the Indexes screen displays general statistics for the Index Service.
 These show resource usage for the Index Service across all nodes.
 
-image::manage-indexes/service-stats.png[,700,align=left]
+image::manage-indexes/service-stats.png["Index Service statistics",700]
 
 Note that the footer is always displayed: it does not scroll out of view.
 
@@ -123,7 +126,7 @@ The total disk file size consumed by all indexes for the selected bucket.
 
 To display Index-Service statistics for a different bucket, open the drop-down list to the right of the Index-Service statistics:
 
-image::manage-indexes/bucket-list.png[,220,align=left]
+image::manage-indexes/bucket-list.png["Index Service statistics, selecting travel-sample",220]
 
 To filter the list of buckets, type a filter term in the text box: only buckets whose name contains the filter term are listed.
 Then, select the required bucket from the list.
diff --git a/modules/manage/pages/monitor/monitoring-n1ql-query.adoc b/modules/manage/pages/monitor/monitoring-n1ql-query.adoc
deleted file mode 100644
index 4acb4da0cd..0000000000
--- a/modules/manage/pages/monitor/monitoring-n1ql-query.adoc
+++ /dev/null
@@ -1,2742 +0,0 @@
-= Monitor Queries
-:description: Monitoring and profiling {sqlpp} queries, query service engines, and corresponding system resources is very important for smoother operational performance and efficiency of the system.
-:page-aliases: monitoring:monitoring-n1ql-query
-
-[abstract]
-{description}
-In fact, often it is vital for diagnosing and troubleshooting issues such as query performance, resource bottlenecks, and overloading of various services.
-
-System keyspaces provide various monitoring details and statistics about individual queries and query service.
-When running on a cluster with multiple query nodes, stats about all queries on all query nodes are collected in these system keyspaces.
-
-For example, this can help identify:
-
-* The top 10 slow or fast queries running on a particular query engine or the cluster.
-* Resource usage statistics of the query service, or resources used for a particular query.
-* Details about the active, completed, and prepared queries.
-* Long-running queries that have been active for more than 2 minutes.
-
-These system keyspaces are virtual keyspaces: they are transient, and are not persisted to disk or permanent storage.
-Hence, the information in these keyspaces pertains to the current instantiation of the query service.
-You can access the keyspaces using any of the following:
-
-* {sqlpp} language (from the cbq shell or Query Workbench)
-* REST API
-* Monitoring SDK
-
-NOTE: All the power of the {sqlpp} query language can be applied on the keyspaces to obtain various insights.
-
-The following diagnostics are provided:
-
-[cols="1,3"]
-|===
-| System Catalogs
-a|
-* xref:n1ql:n1ql-intro/sysinfo.adoc#querying-datastores[system:datastores]
-* xref:n1ql:n1ql-intro/sysinfo.adoc#querying-namespaces[system:namespaces]
-* xref:n1ql:n1ql-intro/sysinfo.adoc#querying-buckets[system:buckets]
-* xref:n1ql:n1ql-intro/sysinfo.adoc#querying-scopes[system:scopes]
-* xref:n1ql:n1ql-intro/sysinfo.adoc#querying-keyspaces[system:keyspaces]
-* xref:n1ql:n1ql-intro/sysinfo.adoc#querying-indexes[system:indexes]
-* xref:n1ql:n1ql-intro/sysinfo.adoc#querying-dual[system:dual]
-
-| Monitoring Catalogs
-a|
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-
-| Security Catalogs
-a|
-* <>
-* <>
-* <>
-* <>
-
-| Other
-a|
-* <>
-
-NOTE: These are only available using REST APIs.
-
-|===
-
-== Authentication and Client Privileges
-
-Client applications must be authenticated with sufficient privileges to access system keyspaces.
-
-* Users must have the *Query System Catalog* role to access restricted system keyspaces.
-For more details about user roles, see xref:learn:security/authorization-overview.adoc[Authorization].
-
-* In addition, users must have permission to access a bucket, scope, or collection to be able to view that item in the system catalog.
-Similarly, users must have SELECT permission on the target of an index to be able to view that index in the system catalog.
-
-* The following system keyspaces are considered open, that is, all users can access them without any special privileges:
- ** `system:dual`
- ** `system:datastores`
- ** `system:namespaces`
-
-== Query Monitoring Settings
-
-The monitoring settings can be set for each query engine (using the REST API) or for each query statement (using the cbq command line tool).
-Both settings are actually set via REST endpoints: using the xref:n1ql:n1ql-rest-api/admin.adoc[Admin REST API] (`/admin/settings` endpoint) and the xref:n1ql:n1ql-rest-api/index.adoc[Query REST API] (`/query/service` endpoint).
-
-The cbq shell and Query Workbench use the Query REST API to set monitoring at the request level.
-Query Workbench automatically enables profiling and timings.
-It can be disabled using the [.ui]*Preferences* option.
-For more information refer to the xref:tools:query-workbench.adoc[Query Workbench] section.
-
-Use the following query parameters to enable, disable, and control the monitoring capabilities, and the level of monitoring and profiling details for each query or globally at a query engine level:
-
-* profile
-* controls
-
-For more details and examples, refer to the
-xref:settings:query-settings.adoc[Query Settings] section.
-
-=== Enable Settings for a Query Engine
-
-You can enable profile settings for each query engine.
-These examples use the localhost IP address and default port numbers.
-Provide the correct credentials, IP address, and port details for your own setup.
-
-. Get the current query settings:
-+
-[source,sh]
-----
-include::settings:example$save-node-level-settings.sh[tag=curl]
-----
-+
-[source,sh]
-----
-cat  ./query_settings.json
-----
-+
-[source,json]
-----
-include::settings:example$node-level-settings.jsonc[]
-----
-
-. Set current query settings profile:
- .. To set the query settings saved in the file [.path]_./query_settings.json_, enter the following command:
-+
-[source,sh]
-----
-curl http://localhost:8093/admin/settings -u user:pword \
-  -X POST \
-  -d@./query_settings.json
-----
-
- .. To explicitly specify the settings, enter the following command:
-+
-[source,sh]
-----
-curl http://localhost:8093/admin/settings -u user:pword \
-  -H 'Content-Type: application/json' \
-  -d '{"profile": "phases"}'
-----
-
-. Verify the settings are changed as specified:
-+
-[source,sh]
-----
-include::settings:example$node-level-settings.sh[tag=curl]
-----
-+
-[source,json]
-----
-{
-  // ...
-  "profile":"phases",
-  "request-size-cap": 67108864,
-  "scan-cap": 512,
-  "servicers": 4,
-  "timeout": 0,
-  "txtimeout": "0s",
-  "use-cbo": true
-}
-----
-
-=== Enable Settings per Session or per Query
-
-You can enable monitoring and profiling settings for each query statement.
-To set query settings using the cbq shell, use the `\SET` command:
-
-[source,sqlpp]
-----
-\set -profile "timings";
-\set;
-----
-
-[source,none]
-----
- Query Parameters :
- Parameter name : profile
- Value : ["timings"]
- ...
-----
-
-To set query settings using the REST API, specify the parameters in the request body:
-
-[source,sh]
-----
-curl http://localhost:8093/query/service -u user:pword \
-  -d 'profile=timings&statement=SELECT * FROM "world" AS hello'
-----
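
For scripted use, you can issue the same request from code.
The following is a minimal Python sketch, using only the standard library, that posts a profiled statement to the `/query/service` endpoint shown above.
The host, port, user, and password are placeholders for your own setup.

[source,python]
----
import base64
import json
import urllib.parse
import urllib.request

def build_profiled_request(statement, profile="timings"):
    """Encode the form body that the /query/service endpoint expects."""
    return urllib.parse.urlencode({"statement": statement,
                                   "profile": profile}).encode()

def run(statement, host="localhost", port=8093,
        user="Administrator", pword="pword"):
    """POST a profiled query and return the decoded JSON response."""
    req = urllib.request.Request(
        "http://{}:{}/query/service".format(host, port),
        data=build_profiled_request(statement))
    # Basic auth header; in production, prefer an SDK or a session.
    token = base64.b64encode("{}:{}".format(user, pword).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
----

The response includes the `profile` attribute described in the next section.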
-
-[#monitor-profile-details]
-== Monitoring and Profiling Details
-
-Couchbase Server provides detailed query monitoring and profiling information.
-The profiling and finer query execution timing details can be obtained for any query.
-
-When a query executes a user-defined function, profiling information is available for the {sqlpp} queries within the user-defined function as well.
-
-[[profile]]
-=== Attribute Profile in Query Response
-
-When profiling is enabled, a query response includes the profile attribute.
-The attribute details are as follows:
-
-.Attribute Details
-[cols="14,8"]
-|===
-| Attribute | Example
-
-a|
-`phaseTimes` -- Cumulative execution times for various phases involved in the query execution, such as authorize, indexscan, fetch, parse, plan, run, etc.
-
-[NOTE]
-====
-This value will be dynamic, depending on the documents processed by various phases up to this moment in time.
-
-A new query on `system:active_requests` will return different values.
-====
-a|
-[source,json]
-----
-"phaseTimes": {
-  "authorize": "823.631µs",
-  "fetch": "656.873µs",
-  "indexScan": "29.146543ms",
-  "instantiate": "236.221µs",
-  "parse": "826.382µs",
-  "plan": "11.831101ms",
-  "run": "16.892181ms"
-}
-----
-
-a|
-`phaseCounts` -- Count of documents processed at selective phases involved in the query execution, such as authorize, indexscan, fetch, parse, plan, run, etc.
-
-[NOTE]
-====
-This value will be dynamic, depending on the documents processed by various phases up to this moment in time.
-
-A new query on `system:active_requests` will return different values.
-====
-a|
-[source,json]
-----
-"phaseCounts": {
-  "fetch": 16,
-  "indexScan": 187
-}
-----
-
-a|
-`phaseOperators` -- Indicates the number of each kind of query operator involved in the different phases of query processing.
-For instance, in this example, one non-covering index path was taken, which involves 1 indexScan and 1 fetch operator.
-
-A join would probably have involved 2 fetches (1 per keyspace).
-
-A union select would have twice as many operator counts (1 per branch of the union).
-
-This is, in essence, the count of all the operators in the `executionTimings` field.
-a|
-[source,json]
-----
-"phaseOperators": {
-  "authorize": 1,
-  "fetch": 1,
-  "indexScan": 2
-}
-----
-
-a|
-`executionTimings` -- The execution details such as kernel and service execution times, number of documents processed at each query operator in each phase, and number of phase switches, for various phases involved in the query execution.
-
-The following statistics are collected for each operator:
-
-`#operator`::
-Name of the operator.
-
-`#stats`::
-These values will be dynamic, depending on the documents processed by various phases up to this moment in time.
-+
-A new query on `system:active_requests` will return different values.
-
-`#itemsIn`;;
-Number of input documents to the operator.
-
-`#itemsOut`;;
-Number of output documents after the operator processing.
-
-`#phaseSwitches`;;
-Number of switches between executing, waiting for services, or waiting for the `goroutine` scheduler.
-
-`execTime`;;
-Time spent executing the operator code inside {sqlpp} query engine.
-
-`kernTime`;;
-Time spent waiting to be scheduled for CPU time.
-
-`servTime`;;
-Time spent waiting for another service, such as index or data.
-+
-* For index scan, it is time spent waiting for GSI/indexer.
-* For fetch, it is time spent waiting on the KV store.
-
-a|
-[source,json]
-----
-"executionTimings": {
-  // …
-  "~children": [
-    {
-      "#operator": "Fetch",
-      "#stats": {
-        "#itemsIn": 187,
-        "#itemsOut": 16,
-        "#phaseSwitches": 413,
-        "execTime": "128.434µs",
-        "kernTime": "15.027879ms",
-        "servTime": "1.590934ms",
-        "state": "services"
-      },
-      "keyspace": "travel-sample",
-      "namespace": "default"
-    },
-    {
-      "#operator": "IntersectScan",
-      "#stats": {
-        "#itemsIn": 187,
-        "#itemsOut": 187,
-        "#phaseSwitches": 749,
-        "execTime": "449.944µs",
-        "kernTime": "14.625524ms"
-      }
-    }
-    // …
-  ]
-}
-----
-
-|===
-
-These statistics (`kernTime`, `servTime`, and `execTime`) can be very helpful in troubleshooting query performance issues, such as:
-
-* A high `servTime` with a low number of items processed indicates that the indexer or KV store is stressed.
-* A high `kernTime` indicates a downstream issue in the query plan, or that the query server has many requests to process, so operators spend more time waiting to be scheduled for CPU time.
-
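
Checks like these can be scripted by walking the `executionTimings` tree in a profiled response.
The Python sketch below is illustrative, not a Couchbase tool: it parses the Go-style duration strings used in the profile output and reports operators whose `servTime` or `kernTime` exceeds a threshold.

[source,python]
----
import re

_UNITS = {"ns": 1e-9, "µs": 1e-6, "us": 1e-6, "ms": 1e-3, "s": 1.0}

def parse_duration(text):
    """Convert a Go-style duration string (e.g. '1.384334ms') to seconds."""
    total = 0.0
    for value, unit in re.findall(r"([0-9.]+)(ns|µs|us|ms|s)", text):
        total += float(value) * _UNITS[unit]
    return total

def flag_slow_operators(node, threshold=0.001, path=()):
    """Yield (operator path, servTime, kernTime) for costly operators."""
    if not isinstance(node, dict):
        return
    stats = node.get("#stats", {})
    serv = parse_duration(stats.get("servTime", "0s"))
    kern = parse_duration(stats.get("kernTime", "0s"))
    name = node.get("#operator")
    here = path + (name,) if name else path
    if name and (serv > threshold or kern > threshold):
        yield (" > ".join(here), serv, kern)
    # Operators nest via "~child" (single) and "~children" (list).
    for child in node.get("~children", []):
        yield from flag_slow_operators(child, threshold, here)
    if "~child" in node:
        yield from flag_slow_operators(node["~child"], threshold, here)
----

Feed it the `executionTimings` object from a timings profile to see which operators spend the most time waiting on the scheduler or on other services.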
-.Phases Profile
-====
-The cbq engine must be started with authorization, for example:
-
-[source,sh]
-----
-./cbq -engine=http://localhost:8091/ -u Administrator -p pword
-----
-
-Using the cbq shell, show the statistics collected when the `profile` is set to `phases`:
-
-[source,sqlpp]
-----
-\set -profile "phases";
-SELECT * FROM `travel-sample`.inventory.airline LIMIT 1;
-----
-
-[source,json]
-----
-{
-  "requestID": "06d6c1c2-1a8a-4989-a856-7314f9eddee5",
-  "signature": {
-    "*": "*"
-  },
-  "results": [
-    {
-      "airline": {
-        "callsign": "MILE-AIR",
-        "country": "United States",
-        "iata": "Q5",
-        "icao": "MLA",
-        "id": 10,
-        "name": "40-Mile Air",
-        "type": "airline"
-      }
-    }
-  ],
-  "status": "success",
-  "metrics": {
-    "elapsedTime": "12.77927ms",
-    "executionTime": "12.570648ms",
-    "resultCount": 1,
-    "resultSize": 254,
-    "serviceLoad": 12
-  },
-  "profile": {
-    "phaseTimes": {
-      "authorize": "19.629µs",
-      "fetch": "401.997µs",
-      "instantiate": "147.686µs",
-      "parse": "4.545234ms",
-      "plan": "409.364µs",
-      "primaryScan": "6.103775ms",
-      "run": "6.699056ms"
-    },
-    "phaseCounts": {
-      "fetch": 1,
-      "primaryScan": 1
-    },
-    "phaseOperators": {
-      "authorize": 1,
-      "fetch": 1,
-      "primaryScan": 1
-    },
-    "requestTime": "2021-04-30T18:37:56.394Z",
-    "servicingHost": "127.0.0.1:8091"
-  }
-}
-----
-====
-
-.Timings Profile
-====
-Using the cbq shell, show the statistics collected when `profile` is set to `timings`:
-
-[source,sqlpp]
-----
-\set -profile "timings";
-SELECT * FROM `travel-sample`.inventory.airline LIMIT 1;
-----
-
-[source,json]
-----
-{
-  "requestID": "268a1240-6864-43a2-af13-ccb8d1e50abf",
-  "signature": {
-    "*": "*"
-  },
-  "results": [
-    {
-      "airline": {
-        "callsign": "MILE-AIR",
-        "country": "United States",
-        "iata": "Q5",
-        "icao": "MLA",
-        "id": 10,
-        "name": "40-Mile Air",
-        "type": "airline"
-      }
-    }
-  ],
-  "status": "success",
-  "metrics": {
-    "elapsedTime": "2.915245ms",
-    "executionTime": "2.755355ms",
-    "resultCount": 1,
-    "resultSize": 254,
-    "serviceLoad": 12
-  },
-  "profile": {
-    "phaseTimes": {
-      "authorize": "18.096µs",
-      "fetch": "388.122µs",
-      "instantiate": "31.702µs",
-      "parse": "646.157µs",
-      "plan": "120.427µs",
-      "primaryScan": "1.402918ms",
-      "run": "1.936852ms"
-    },
-    "phaseCounts": {
-      "fetch": 1,
-      "primaryScan": 1
-    },
-    "phaseOperators": {
-      "authorize": 1,
-      "fetch": 1,
-      "primaryScan": 1
-    },
-    "requestTime": "2021-04-30T18:40:13.239Z",
-    "servicingHost": "127.0.0.1:8091",
-    "executionTimings": {
-      "#operator": "Authorize",
-      "#stats": {
-        "#phaseSwitches": 4,
-        "execTime": "1.084µs",
-        "servTime": "17.012µs"
-      },
-      "privileges": {
-        "List": [
-          {
-            "Target": "default:travel-sample.inventory.airline",
-            "Priv": 7,
-            "Props": 0
-          }
-        ]
-      },
-      "~child": {
-        "#operator": "Sequence",
-        "#stats": {
-          "#phaseSwitches": 1,
-          "execTime": "2.474µs"
-        },
-        "~children": [
-          {
-            "#operator": "PrimaryScan3",
-            "#stats": {
-              "#itemsOut": 1,
-              "#phaseSwitches": 7,
-              "execTime": "18.584µs",
-              "kernTime": "8.869µs",
-              "servTime": "1.384334ms"
-            },
-            "bucket": "travel-sample",
-            "index": "def_inventory_airline_primary",
-            "index_projection": {
-              "primary_key": true
-            },
-            "keyspace": "airline",
-            "limit": "1",
-            "namespace": "default",
-            "scope": "inventory",
-            "using": "gsi"
-          },
-          {
-            "#operator": "Fetch",
-            "#stats": {
-              "#itemsIn": 1,
-              "#itemsOut": 1,
-              "#phaseSwitches": 10,
-              "execTime": "25.64µs",
-              "kernTime": "1.427752ms",
-              "servTime": "362.482µs"
-            },
-            "bucket": "travel-sample",
-            "keyspace": "airline",
-            "namespace": "default",
-            "scope": "inventory"
-          },
-          {
-            "#operator": "InitialProject",
-            "#stats": {
-              "#itemsIn": 1,
-              "#itemsOut": 1,
-              "#phaseSwitches": 9,
-              "execTime": "6.006µs",
-              "kernTime": "1.825917ms"
-            },
-            "result_terms": [
-              {
-                "expr": "self",
-                "star": true
-              }
-            ]
-          },
-          {
-            "#operator": "Limit",
-            "#stats": {
-              "#itemsIn": 1,
-              "#itemsOut": 1,
-              "#phaseSwitches": 4,
-              "execTime": "2.409µs",
-              "kernTime": "2.094µs"
-            },
-            "expr": "1"
-          },
-          {
-            "#operator": "Stream",
-            "#stats": {
-              "#itemsIn": 1,
-              "#itemsOut": 1,
-              "#phaseSwitches": 6,
-              "execTime": "46.964µs",
-              "kernTime": "1.844828ms"
-            }
-          }
-        ]
-      },
-      "~versions": [
-        "7.0.0-N1QL",
-        "7.0.0-4960-enterprise"
-      ]
-    }
-  }
-}
-----
-====
-
-[[plan]]
-=== Attribute Meta in System Keyspaces
-
-The `meta().plan` virtual attribute captures the whole query plan, along with the monitoring stats of the various phases and the query operators involved.
-The `meta().plan` attribute must be explicitly included in the SELECT projection list.
-
-The `meta().plan` attribute is enabled only for individual requests that are running (`active_requests`) or completed (`completed_requests`) when the profile is set to timings (`profile="timings"`) for that request.
-If the profile is set to off at the engine level, but individual requests have been run with `profile="timings"`, then the system keyspaces return the plan only for those requests.
-
-Since there may be a combination of profile settings for all of the requests reported by the system keyspaces, not all requests returned will have a `meta().plan` attachment.
-
-NOTE: For `system:prepareds` requests, the `meta().plan` is available at all times, since the `PREPARE` statement is not dependent on the profile setting.
-
-This attribute is enabled for the following system keyspaces:
-
-* `system:active_requests`
-* `system:completed_requests`
-* `system:prepareds`
-
-For a detailed example, see <<sys-active-examples,the active requests examples>>.
-
-== Monitor Query Clusters
-
-Couchbase Server allows you to monitor many aspects of an active cluster: cluster-aware operations, diagnostics, and system keyspace features that span multiple nodes.
-Functionality includes:
-
-* Ability to access active / completed / prepared requests across all Query nodes from {sqlpp}.
-* Ability to list nodes by type and with status from {sqlpp}.
-* Ability to list system keyspaces from `system:keyspaces`.
-* Extra fields in `system:active_requests` and `system:completed_requests`.
-* Counters to keep track of specific requests, such as cancelled requests.
-* Ability to kill a CREATE INDEX request.
-
-=== System Keyspaces
-
-* The `system:keyspaces` keyspace can be augmented to list system keyspaces with a static map.
-The small disadvantage of this is that it has to be maintained as new system keyspaces are added.
-* The `system:active_requests` and `system:completed_requests` keyspaces can report scan consistency.
-* The `system:prepareds` keyspace can list min and max execution and service times, as well as average times.
-
-=== cbq-engine-cbauth
-
-`cbq-engine-cbauth` is an internal user that the query service uses to allow Query Workbench clients to query across multiple query nodes.
-
-Since Query Workbench connects only to the local node where its cbq-engine is running, it cannot perform query-clustered operations on its own.
-
-To get around this restriction, once a Query Workbench client connects to a query node, this internal user (cbq-engine-cbauth) is used for any further inter-node user verification.
-
-[#vitals]
-== System Vitals
-
-The [.cmd]`Vitals` API provides data about the running state and health of the query engine, such as number of logical cores, active threads, queued threads, CPU utilization, memory usage, network utilization, garbage collection percentage, and so on.
-This information can be very useful to assess the current workload and performance characteristics of a query engine, and hence load-balance the requests being sent to various query engines.
-
-For field names and meanings, refer to xref:n1ql:n1ql-rest-api/admin.adoc#_vitals[Vitals].
-
-=== Get System Vitals
-
-[source,sh]
-----
-curl -u Administrator:pword http://localhost:8093/admin/vitals
-----
-
-[source,json]
-----
-{
-  "uptime": "7h39m32.668577197s",
-  "local.time": "2021-04-30 18:42:39.517208807 +0000 UTC m=+27573.945319668",
-  "version": "7.0.0-N1QL",
-  "total.threads": 191,
-  "cores": 2,
-  "gc.num": 669810600,
-  "gc.pause.time": "57.586373ms",
-  "gc.pause.percent": 0,
-  "memory.usage": 247985184,
-  "memory.total": 11132383704,
-  "memory.system": 495554808,
-  "cpu.user.percent": 0,
-  "cpu.sys.percent": 0,
-  "request.completed.count": 140,
-  "request.active.count": 0,
-  "request.per.sec.1min": 0.0018,
-  "request.per.sec.5min": 0.0055,
-  "request.per.sec.15min": 0.0033,
-  "request_time.mean": "536.348163ms",
-  "request_time.median": "54.065567ms",
-  "request_time.80percentile": "981.869933ms",
-  "request_time.95percentile": "2.543128455s",
-  "request_time.99percentile": "4.627922799s",
-  "request.prepared.percent": 0
-}
-----
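
Vitals can drive simple client-side load balancing.
The Python sketch below is an illustrative example, not a Couchbase recommendation: given `/admin/vitals` responses from several query nodes (already decoded into dicts), it picks the node with the lowest crude load score.
The weighting is an assumption you would tune for your workload.

[source,python]
----
def load_score(vitals):
    """Crude load score built from the vitals fields shown above."""
    return (10.0 * vitals.get("request.active.count", 0)
            + vitals.get("request.per.sec.1min", 0.0)
            + vitals.get("cpu.user.percent", 0.0)
            + vitals.get("cpu.sys.percent", 0.0))

def pick_node(vitals_by_node):
    """Return the name of the node with the lowest load score."""
    return min(vitals_by_node, key=lambda node: load_score(vitals_by_node[node]))
----

A load balancer could then direct the next request to the node returned by `pick_node`.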
-
-[#sys-active-req]
-== Monitor and Manage Active Requests
-
-The `system:active_requests` catalog lists all currently executing active requests or queries.
-
-For field names and meanings, refer to xref:n1ql:n1ql-rest-api/admin.adoc#_requests[Requests].
-The profile-related attributes are described in <<monitor-profile-details,Monitoring and Profiling Details>>.
-
-[[sys-active-get]]
-=== Get Active Requests
-
-To view active requests with Admin REST API:
-
-[source,sh]
-----
-curl -u Administrator:pword http://localhost:8093/admin/active_requests
-----
-
-To view active requests with {sqlpp}, including the query plan:
-
-[source,sqlpp]
-----
-SELECT *, meta().plan FROM system:active_requests;
-----
-
-[[sys-active-delete]]
-=== Terminate an Active Request
-
-The DELETE command can be used to terminate an active request, for instance, a non-responding or a long-running query.
-
-To terminate an active request [.var]`uuid` with the Admin REST API:
-
-[source,sh]
-----
-curl -u Administrator:pword -X DELETE http://localhost:8093/admin/active_requests/uuid
-----
-
-To terminate an active request [.var]`uuid` with {sqlpp}:
-
-[source,sqlpp]
-----
-DELETE FROM system:active_requests WHERE requestId = "uuid";
-----
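
One common use is finding and terminating queries that have run too long, such as the 2-minute case mentioned at the start of this page.
The Python sketch below is illustrative: given rows already fetched from `system:active_requests`, it parses the Go-style `elapsedTime` strings and generates a termination statement for each request past the cutoff.

[source,python]
----
import re

def to_seconds(duration):
    """Parse a Go-style duration such as '2m5.3s' or '65.543946ms'."""
    units = {"h": 3600.0, "m": 60.0, "s": 1.0,
             "ms": 1e-3, "µs": 1e-6, "ns": 1e-9}
    return sum(float(value) * units[unit]
               for value, unit in re.findall(r"([0-9.]+)(h|ms|µs|ns|m|s)",
                                             duration))

def long_running(rows, cutoff_seconds=120.0):
    """Yield a DELETE statement for each request running past the cutoff."""
    for row in rows:
        if to_seconds(row.get("elapsedTime", "0s")) > cutoff_seconds:
            yield ('DELETE FROM system:active_requests '
                   'WHERE requestId = "{}";'.format(row["requestId"]))
----

Each generated statement can then be run via the REST API or the cbq shell, exactly as shown above.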
-
-[[sys-active-examples]]
-=== Examples
-
-.Get Active
-====
-[source,sqlpp]
-----
-SELECT *, meta().plan FROM system:active_requests;
-----
-
-[source,json]
-----
-[
-  {
-    "active_requests": {
-      "clientContextID": "81984707-a9cd-4b78-9110-d130eb580d7f",
-      "elapsedTime": "65.543946ms",
-      "executionTime": "65.507111ms",
-      "node": "127.0.0.1:8091",
-      "phaseCounts": {
-        "primaryScan": 1
-      },
-      "phaseOperators": {
-        "authorize": 1,
-        "fetch": 1,
-        "primaryScan": 1
-      },
-      "phaseTimes": {
-        "authorize": "2.361862ms",
-        "fetch": "7.222µs",
-        "instantiate": "13.233µs",
-        "parse": "660.048µs",
-        "plan": "52.877µs",
-        "primaryScan": "41.125271ms"
-      },
-      "remoteAddr": "127.0.0.1:57065",
-      "requestId": "27e73286-c6cc-4c26-8977-ef8d68e91c8f",
-      "requestTime": "2019-05-06 09:08:42.431161361 -0700 PDT m=+6976.301141271",
-      "scanConsistency": "unbounded",
-      "state": "running",
-      "statement": "SELECT *, meta().plan FROM system:active_requests;",
-      "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:66.0) Gecko/20100101 Firefox/66.0 (Couchbase Query Workbench (6.5.0-2859-enterprise))",
-      "users": "Administrator"
-    },
-    "plan": { // <1>
-      "#operator": "Sequence",
-      "#stats": {
-        "#phaseSwitches": 1,
-        "execTime": "1.661µs"
-      },
-      "~children": [
-        {
-          "#operator": "Authorize",
-          "#stats": {
-            "#phaseSwitches": 3,
-            "execTime": "4.844µs",
-            "servTime": "2.357018ms"
-          },
-          "privileges": {
-            "List": [
-              {
-                "Priv": 4,
-                "Target": "#system:active_requests"
-              }
-            ]
-          },
-          "~child": {
-            "#operator": "Sequence",
-            "#stats": {
-              "#phaseSwitches": 1,
-              "execTime": "2.71µs"
-            },
-            "~children": [
-              {
-                "#operator": "PrimaryScan",
-                "#stats": {
-                  "#itemsOut": 1,
-                  "#phaseSwitches": 7,
-                  "execTime": "22.314µs",
-                  "kernTime": "2.13µs",
-                  "servTime": "41.102957ms"
-                },
-                "index": "#primary",
-                "keyspace": "active_requests",
-                "namespace": "#system",
-                "using": "system"
-              },
-              {
-                "#operator": "Fetch",
-                "#stats": {
-                  "#itemsIn": 1,
-                  "#phaseSwitches": 5,
-                  "execTime": "7.222µs",
-                  "kernTime": "41.130913ms",
-                  "servTime": "22.888285ms",
-                  "state": "services"
-                },
-                "keyspace": "active_requests",
-                "namespace": "#system"
-              },
-              {
-                "#operator": "Sequence",
-                "#stats": {
-                  "#phaseSwitches": 1,
-                  "execTime": "1.544µs"
-                },
-                "~children": [
-                  {
-                    "#operator": "InitialProject",
-                    "#stats": {
-                      "#phaseSwitches": 1,
-                      "execTime": "980ns",
-                      "kernTime": "64.066618ms",
-                      "state": "kernel"
-                    },
-                    "result_terms": [
-                      {
-                        "expr": "self",
-                        "star": true
-                      },
-                      {
-                        "expr": "(meta(`active_requests`).`plan`)"
-                      }
-                    ]
-                  },
-                  {
-                    "#operator": "FinalProject",
-                    "#stats": {
-                      "#phaseSwitches": 1,
-                      "execTime": "827ns"
-                    }
-                  }
-                ]
-              }
-            ]
-          }
-        },
-        {
-          "#operator": "Stream",
-          "#stats": {
-            "#phaseSwitches": 1,
-            "execTime": "1.29µs",
-            "kernTime": "66.511806ms",
-            "state": "kernel"
-          }
-        }
-      ],
-      "~versions": [
-        "2.0.0-N1QL",
-        "6.5.0-2859-enterprise"
-      ]
-    }
-  }
-]
-----
-
-<1> The *plan* section contains a tree of operators that combine to execute the {sqlpp} query.
-The root operator is a Sequence, which itself has a collection of child operators like Authorize, PrimaryScan, Fetch, and possibly even more Sequences.
-====
-
-[#sys-prepared]
-== Monitor and Manage Prepared Statements
-
-The `system:prepareds` catalog provides data about the known prepared statements and their state in a query engine's prepared statement cache.
-For each prepared statement, this catalog provides information such as name, statement, query plan, last use time, number of uses, and so on.
-
-For field names and meanings, refer to xref:n1ql:n1ql-rest-api/admin.adoc#_statements[Statements].
-The `system:prepareds` catalog returns all the properties that the {sqlpp} Admin REST API would return for a specific prepared statement.
-In addition, the `system:prepareds` catalog also returns the following properties.
-
-[options="header", cols=".^3a,.^11a,.^4a"]
-|===
-|Name|Description|Schema
-|**node** +
-__required__
-|The node on which the prepared statement is stored.
-|string
-
-|**namespace** +
-__required__
-|The namespace in which the prepared statement is stored.
-Currently, only the `default` namespace is available.
-|string
-|===
-
-A prepared statement is created and stored relative to the current xref:n1ql:n1ql-intro/queriesandresults.adoc#query-context[query context].
-You can create multiple prepared statements with the same name, each stored relative to a different query context.
-This enables you to run multiple instances of the same application against different datasets.
-
-When there are multiple prepared statements with the same name in different query contexts, the name of the prepared statement in the `system:prepareds` catalog includes the associated query context in brackets.
-
-[[sys-prepared-get]]
-=== Get Prepared Statements
-
-To get a list of all known prepared statements, you can use the Admin REST API or a {sqlpp} query:
-
-[source,sh]
-----
-curl -u Administrator:pword http://localhost:8093/admin/prepareds
-----
-
-[source,sqlpp]
-----
-SELECT * FROM system:prepareds;
-----
-
-To get information about a specific prepared statement [.var]`example1`, you can use the Admin REST API or a {sqlpp} query:
-
-[source,sh]
-----
-curl -u Administrator:pword http://localhost:8093/admin/prepareds/example1
-----
-
-[source,sqlpp]
-----
-SELECT * FROM system:prepareds WHERE name = "example1";
-----
-
-[[sys-prepared-delete]]
-=== Delete Prepared Statements
-
-To delete a specific prepared statement [.var]`p1`, you can use the Admin REST API or a {sqlpp} query:
-
-[source,sh]
-----
-curl -u Administrator:pword -X DELETE http://localhost:8093/admin/prepareds/p1
-----
-
-[source,sqlpp]
-----
-DELETE FROM system:prepareds WHERE name = "p1";
-----
-
-To delete all the known prepared statements, you must use a {sqlpp} query:
-
-[source,sqlpp]
-----
-DELETE FROM system:prepareds;
-----
-
-[[sys-prepared-examples]]
-=== Examples
-
-.Get Prepared
-====
-.Prepared statement with default query context -- using cbq
-[source,sqlpp]
-----
-\UNSET -query_context;
-PREPARE p1 AS SELECT * FROM `travel-sample`.inventory.airline WHERE iata = "U2";
-----
-
-[source,json]
-----
-{
-  "requestID": "64069886-eb17-4fa6-8cc8-e60ebe93d97c",
-  "signature": "json",
-  "results": [
-    {
-      "encoded_plan": "H4sIAAAAAAAA/wEAAP//AAAAAAAAAAA=",
-      "featureControls": 76,
-      "indexApiVersion": 4,
-      "indexScanKeyspaces": {
-        "default:travel-sample.inventory.airline": false
-      },
-      "name": "[127.0.0.1:8091]p1",
-      "namespace": "default",
-      "operator": {
-        "#operator": "Authorize",
-        "privileges": {
-          "List": [
-            {
-              "Priv": 7,
-              "Props": 0,
-              "Target": "default:travel-sample.inventory.airline"
-            }
-          ]
-        },
-        "~child": {
-          "#operator": "Sequence",
-          "~children": [
-            {
-              "#operator": "Sequence",
-              "~children": [
-                {
-                  "#operator": "PrimaryScan3",
-                  "bucket": "travel-sample",
-                  "index": "def_inventory_airline_primary",
-                  "index_projection": {
-                    "primary_key": true
-                  },
-                  "keyspace": "airline",
-                  "namespace": "default",
-                  "scope": "inventory",
-                  "using": "gsi"
-                },
-                {
-                  "#operator": "Fetch",
-                  "bucket": "travel-sample",
-                  "keyspace": "airline",
-                  "namespace": "default",
-                  "scope": "inventory"
-                },
-                {
-                  "#operator": "Parallel",
-                  "~child": {
-                    "#operator": "Sequence",
-                    "~children": [
-                      {
-                        "#operator": "Filter",
-                        "condition": "((`airline`.`iata`) = \"U2\")"
-                      },
-                      {
-                        "#operator": "InitialProject",
-                        "result_terms": [
-                          {
-                            "expr": "self",
-                            "star": true
-                          }
-                        ]
-                      }
-                    ]
-                  }
-                }
-              ]
-            },
-            {
-              "#operator": "Stream"
-            }
-          ]
-        }
-      },
-      "queryContext": "",
-      "reqType": "SELECT",
-      "signature": {
-        "*": "*"
-      },
-      "text": "PREPARE p1 AS SELECT * FROM `travel-sample`.inventory.airline WHERE iata = \"U2\";",
-      "useCBO": true
-    }
-  ],
-  "status": "success",
-  "metrics": {
-    "elapsedTime": "105.814848ms",
-    "executionTime": "105.648798ms",
-    "resultCount": 1,
-    "resultSize": 3301,
-    "serviceLoad": 12
-  }
-}
-----
-
-.Prepared statement with specified query context -- using cbq
-[source,sqlpp]
-----
-\SET -query_context travel-sample.inventory;
-PREPARE p1 AS SELECT * FROM airline WHERE iata = "U2";
-----
-
-[source,json]
-----
-{
-  "requestID": "1c90603e-5e42-42b4-9362-4fc96bf895ac",
-  "signature": "json",
-  "results": [
-    {
-      "encoded_plan": "H4sIAAAAAAAA/wEAAP//AAAAAAAAAAA=",
-      "featureControls": 76,
-      "indexApiVersion": 4,
-      "indexScanKeyspaces": {
-        "default:travel-sample.inventory.airline": false
-      },
-      "name": "[127.0.0.1:8091]p1",
-      "namespace": "default",
-      "operator": {
-        "#operator": "Authorize",
-        "privileges": {
-          "List": [
-            {
-              "Priv": 7,
-              "Props": 0,
-              "Target": "default:travel-sample.inventory.airline"
-            }
-          ]
-        },
-        "~child": {
-          "#operator": "Sequence",
-          "~children": [
-            {
-              "#operator": "Sequence",
-              "~children": [
-                {
-                  "#operator": "PrimaryScan3",
-                  "bucket": "travel-sample",
-                  "index": "def_inventory_airline_primary",
-                  "index_projection": {
-                    "primary_key": true
-                  },
-                  "keyspace": "airline",
-                  "namespace": "default",
-                  "scope": "inventory",
-                  "using": "gsi"
-                },
-                {
-                  "#operator": "Fetch",
-                  "bucket": "travel-sample",
-                  "keyspace": "airline",
-                  "namespace": "default",
-                  "scope": "inventory"
-                },
-                {
-                  "#operator": "Parallel",
-                  "~child": {
-                    "#operator": "Sequence",
-                    "~children": [
-                      {
-                        "#operator": "Filter",
-                        "condition": "((`airline`.`iata`) = \"U2\")"
-                      },
-                      {
-                        "#operator": "InitialProject",
-                        "result_terms": [
-                          {
-                            "expr": "self",
-                            "star": true
-                          }
-                        ]
-                      }
-                    ]
-                  }
-                }
-              ]
-            },
-            {
-              "#operator": "Stream"
-            }
-          ]
-        }
-      },
-      "queryContext": "travel-sample.inventory",
-      "reqType": "SELECT",
-      "signature": {
-        "*": "*"
-      },
-      "text": "PREPARE p1 AS SELECT * FROM airline WHERE iata = \"U2\";",
-      "useCBO": true
-    }
-  ],
-  "status": "success",
-  "metrics": {
-    "elapsedTime": "17.476424ms",
-    "executionTime": "17.187836ms",
-    "resultCount": 1,
-    "resultSize": 3298,
-    "serviceLoad": 12
-  }
-}
-----
-
-.List prepared statements
-[source,sqlpp]
-----
-SELECT *, meta().plan FROM system:prepareds;
-----
-
-[source,json]
-----
-{
-  "requestID": "d976e59a-d74e-4350-b0df-fa137099d594",
-  "signature": {
-    "*": "*",
-    "plan": "json"
-  },
-  "results": [
-    {
-      "plan": {
-        // ...
-      },
-      "prepareds": {
-        "encoded_plan": "H4sIAAAAAAAA/6RTUW/TPBT9K9H5XrbJ30QBMcmIhzJ1AjG0qh3wwKbEJLedmWt71061UIXfjpxkndYhENpbYt97z7nn+GxAtnQVVbk3ykICAgtSsWY6djayMwHy6JWAthXdjr3+TBy0s5Avh7N5qewHaoJXJQXIDSpaqNpEGVmtyfwf1MobOtR2TTY6bg6VZqMtQS6UCdQKWLUiSPgR+u9uFOTdIAg4T6yi4zT+v/sfjOt45Vj/IAh41mttaNmTONUhQn7d4FzxkuL9tL/SEpiyXkMepQ/nA+Sz9rIV+FleaVPtMpjTTU22TG19AZPtcP+5aMp6pbhJcr6AwLe6vO54P+CLQfR+n3zLPh/Y576fcleXe3bfqYydYxsMt/k1NZCR66T+9eAdJO4l+L0NoXQ+nWxhIVAHbZeQWAaNVjxc6YRiefWnXZ6EvYs2VayMIYMneXWiTSSGQOlspXvhsLdXDPyKw0KrqIr97E12gU/PL7D/iMh7q6NWZtpLDwGmUJuYR+JV6ADp1qfCQGaRVouKBzsu28s2vbYd4pFJrZDuBFJOtyE8Go0EbmriJqWVbmOfYKab86aTaz45nRyfJxC9tF2skyoHkDhAKzC0TGeT6Xg2yfwoG8+zvic7yE5mZx+z4oFpxePEZF/eTWaTLMmyFeV19zLo+O3ZsNivAAAA//+q+jhuaAQAAA==",
-        "featuresControl": 76,
-        "indexApiVersion": 4,
-        "indexScanKeyspaces": {
-          "default:travel-sample.inventory.airline": false
-        },
-        "name": "p1", // <1>
-        "namespace": "default",
-        "node": "127.0.0.1:8091",
-        "statement": "PREPARE p1 AS SELECT * FROM `travel-sample`.inventory.airline WHERE iata = \"U2\";",
-        "uses": 0
-      }
-    },
-    {
-      "plan": {
-        // ...
-      },
-      "prepareds": {
-        "encoded_plan": "H4sIAAAAAAAA/6STT28TMRDFv8rqcWkrExFAVDLiEKpUIIoaJQUOtNqY3Ulq6tju2Bt1iZbPjry7TWmKQKg3/xnPvPk9zwZkC1dSmXujLCQgsCAVK6YjZyM7EyAPXwloW9LNyOvPxEE7C/myP5sVyn6gOnhVUIDcoKSFqkyUkdWazNOgVt7QQNs12ei4HijNRluCXCgTqBGwakWQ8EN06zYV5G0iCDhPrKLjlP7J3QajKl461j8IAp71WhtadiJOdIiQXzc4U7ykeJftn7IEJqzXkIdp4XyAfNZcNAI/i0ttyl0FM7quyBbpWRfAZNu6/x00Yb1SXCecLyDwrSquWt339KKH3vWTb9Xnvfrcd1lu43LP7jsVsXVsg/42v6IaMnKV6F/13kHiDsGfbQiF8+lkWxYCVdB2CYll0GjE/ZaOKRaXf+vlUbV3q00UK2PI4FFeHWsTiSFQOFvqDhz29ua9vvlgrlVU8/3sTXaOT8/Psf9AyHuro1Zm0qGHAFOoTMwj8Sq0BenGp8BAZpFai4p7Oy6aiyb9th3hkUmtkO4E0pxuh/BwOBS4rojrNK108wDy4HevmK7P6pbibHwyPjpLtfXSttOeYB1A4gCNQJ9pMh1PRtNx5ofZaJZ1b7KD7Hh6+jHreWRf3o2n4ywx2RJ53X4LOnp72nf1KwAA////9+bsZQQAAA==",
-        "featuresControl": 76,
-        "indexApiVersion": 4,
-        "indexScanKeyspaces": {
-          "default:travel-sample.inventory.airline": false
-        },
-        "name": "p1(travel-sample.inventory)", // <2>
-        "namespace": "default",
-        "node": "127.0.0.1:8091",
-        "statement": "PREPARE p1 AS SELECT * FROM airline WHERE iata = \"U2\";",
-        "uses": 0
-      }
-    }
-  ],
-  "status": "success",
-  "metrics": {
-    "elapsedTime": "25.323496ms",
-    "executionTime": "25.173646ms",
-    "resultCount": 2,
-    "resultSize": 7891,
-    "serviceLoad": 12
-  }
-}
-----
-
-Note that the names of the prepared statements are identical, but they are associated with different query contexts.
-
-<.> The name of the prepared statement for the default query context
-<.> The name of the prepared statement showing the associated query context
-====
-
-[#sys-completed-req]
-== Monitor and Manage Completed Requests
-
-By default, the `system:completed_requests` catalog maintains a list of the most recent completed requests that have run longer than a predefined threshold of time.
-(You can also log completed requests that meet other conditions that you define.)
-
-For each completed request, this catalog maintains information such as requestId, statement text, prepared name (if prepared statement), request time, service time, and so on.
-This information provides a general insight into the health and performance of the query engine and the cluster.
-
-For field names and meanings, refer to xref:n1ql:n1ql-rest-api/admin.adoc#_requests[Requests].
-Most field names and meanings match exactly those of `system:active_requests`.
-
-Note that the `completed` state means that the request was started and completed by the Query service, but it does not mean that it was necessarily successful.
-The request could have been successful, or completed with errors.
-
-To find requests that completed successfully, search for completed requests whose `state` is `completed` and whose `errorCount` field has the value `0`.
-
-[NOTE]
-====
-Request profiling affects the `system:completed_requests` keyspace in the following ways:
-
-* When the feature is turned on, completed requests are stored with their execution plan.
-* Profiling information is likely to use 100KB+ per entry.
-* Due to the added overhead of running both profiling and logging, we recommend turning on both of them only when needed.
-Running only one of them continuously has no noticeable effect on performance.
-* Profiling does not carry any extra cost beyond memory for completed requests, so it's fine to run it continuously.
-====
-
-[[sys-completed-get]]
-=== Get Completed Requests
-
-To get a list of all logged completed requests using the Admin REST API:
-
-[source,sh]
-----
-curl -u Administrator:pword http://localhost:8093/admin/completed_requests
-----
-
-To get a list of all logged completed requests using {sqlpp}, including the query plan:
-
-[source,sqlpp]
-----
-SELECT *, meta().plan FROM system:completed_requests;
-----
-
-To get a list of all logged completed requests using {sqlpp}, including only successful requests:
-
-[source,sqlpp]
-----
-SELECT * FROM system:completed_requests
-WHERE state = "completed" AND errorCount = 0;
-----
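-
-To identify the slowest completed requests, you can also sort by service time.
-The following sketch assumes the `STR_TO_DURATION()` function is available to convert duration strings such as `"15.6s"` into a sortable value:
-
-[source,sqlpp]
-----
-SELECT statement, serviceTime FROM system:completed_requests
-ORDER BY STR_TO_DURATION(serviceTime) DESC
-LIMIT 10;
-----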
-
-[[sys-completed-delete]]
-=== Purge the Completed Requests
-
-To purge a completed request [.var]`uuid` with the Admin REST API:
-
-[source,sh]
-----
-curl -u Administrator:pword -X DELETE http://localhost:8093/admin/completed_requests/uuid
-----
-
-To purge a completed request [.var]`uuid` with {sqlpp}:
-
-[source,sqlpp]
-----
-DELETE FROM system:completed_requests WHERE requestId = "uuid";
-----
-
-To purge the completed requests for a given time period, use:
-
-[source,sqlpp]
-----
-DELETE FROM system:completed_requests WHERE requestTime LIKE "2015-09-09%";
-----
-
-[[sys-completed-config]]
-=== Configure the Completed Requests
-
-You can configure the `system:completed_requests` keyspace by specifying parameters through the Admin API settings endpoint.
-
-In Couchbase Server 6.5 and later, you can specify the conditions for completed request logging using the `completed` field.
-
-This field takes a JSON object containing the names and values of logging qualifiers.
-Completed requests that meet the defined qualifiers are logged.
-
-[source,sh]
-----
-curl http://localhost:8093/admin/settings -u Administrator:password \
-  -H 'Content-Type: application/json' \
-  -d '{"completed": {"user": "marco", "error": 12003}}'
-----
-
-==== Logging Qualifiers
-
-You can specify the following logging qualifiers.
-A completed request is logged if _any_ of the qualifiers are met (logical OR).
-
-[horizontal]
-`threshold`:: The execution time threshold in milliseconds.
-`aborted`:: Whether to log requests that generate a panic.
-`error`:: Log requests returning this error number.
-`client`:: Log requests from this IP address.
-`user`:: Log requests with this user name.
-`context`:: Log requests with this client context ID.
-
-For full details, refer to xref:n1ql:n1ql-rest-api/admin.adoc#_logging_parameters[Logging parameters].
-
-The basic syntax adds a qualifier to the logging parameters, i.e. any existing qualifiers are not removed.
-You can change the value of a logging qualifier by specifying the same qualifier again with a new value.
-
-To add a new instance of an existing qualifier, use a plus sign (`+`) before the qualifier name, e.g. `+user`.
-To remove a qualifier, use a minus sign (`-`) before the qualifier name, e.g. `-user`.
-
-For example, the following request will add user `simon` to those tracked, and remove error `12003`.
-
-[source,sh]
-----
-curl http://localhost:8093/admin/settings -u Administrator:password \
-  -H 'Content-Type: application/json' \
-  -d '{"completed": {"+user": "simon", "-error": 12003}}'
-----
-
-Similarly, you could remove all logging by execution time with the following request, as long as the value matches the existing threshold.
-
-[source,sh]
-----
-curl http://localhost:8093/admin/settings -u Administrator:password \
-  -H 'Content-Type: application/json' \
-  -d '{"completed": {"-threshold": 1000}}'
-----
-
-==== Tagged Sets
-
-You can also specify qualifiers that have to be met as a group for the completed request to be logged (logical AND).
-
-To do this, specify the `tag` field along with a set of qualifiers, like so:
-
-[source,sh]
-----
-curl http://localhost:8093/admin/settings -u Administrator:password \
-  -H 'Content-Type: application/json' \
-  -d '{"completed": {"user": "marco", "error": 12003, "tag": "both_user_and_error"}}'
-----
-
-In this case, the request will be logged when both user and error match.
-
-The tag name can be any string that is meaningful and unique.
-Requests that match a tagged set of conditions are logged with a field `~tag`, which is set to the name of the tag.
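-
-For example, assuming the tag name shown above, you can retrieve just the requests logged by that tagged set (a sketch; the backticks are required because `~` is not a valid identifier character):
-
-[source,sqlpp]
-----
-SELECT * FROM system:completed_requests WHERE `~tag` = "both_user_and_error";
-----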
-
-To add a qualifier to a tagged set, specify the tag name again along with the new qualifier:
-
-[source,sh]
-----
-curl http://localhost:8093/admin/settings -u Administrator:password \
-  -H 'Content-Type: application/json' \
-  -d '{"completed": {"client": "172.1.2.3", "tag": "both_user_and_error"}}'
-----
-
-You cannot add a new instance of an existing qualifier to a tagged set using a plus sign (`+`) before the qualifier name.
-For example, you cannot add a `user` qualifier to a tagged set that already contains a `user` qualifier.
-If you need to track two users with the same error, create two tagged sets, one per user.
-
-You can remove a qualifier from a tagged set using a minus sign (`-`) before the qualifier name, e.g. `-user`.
-When you remove the last qualifier from a tagged set, the tagged set is removed.
-
-[NOTE]
---
-You can specify multiple tagged sets.
-In this case, completed requests are logged if they match all of the qualifiers in any of the tagged sets.
-
-You can also specify a mixture of tagged sets and individual qualifiers.
-In this case, completed requests are logged if they match any of the individual qualifiers, or all of the qualifiers in any of the tagged sets.
---
-
-==== Completed Threshold
-
-The [.param]`completed-threshold` field provides another way of specifying the `threshold` qualifier within the `completed` field.
-
-This field sets the minimum request duration after which requests are added to the `system:completed_requests` catalog.
-The default value is 1000ms.
-Specify [.in]`0` to log all requests and [.in]`-1` to not log any requests to the keyspace.
-
-To specify a different value, use:
-
-[source,sh]
-----
-curl http://localhost:port/admin/settings -u user:pword \
-  -H 'Content-Type: application/json' \
-  -d '{"completed-threshold":0}'
-----
-
-==== Completed Limit
-
-The [.param]`completed-limit` field sets the number of most recent requests to be tracked in the `system:completed_requests` catalog.
-The default value is 4000.
-Specify [.in]`0` to not track any requests and [.in]`-1` to set no limit.
-
-To specify a different value, use:
-
-[source,sh]
-----
-curl http://localhost:port/admin/settings -u user:pword \
-  -H 'Content-Type: application/json' \
-  -d '{"completed-limit":1000}'
-----
-
-[[sys-completed-examples]]
-=== Examples
-
-[[example-2]]
-.Completed Request
-====
-First, using the cbq shell, we set `profile = "timings"` and run a long query which takes at least 1000ms (the default value of the `completed-threshold` query setting) to get registered in the `system:completed_requests` keyspace:
-
-.Query 1
-[source,sqlpp]
-----
-\set -profile "timings";
-SELECT * FROM `travel-sample`.inventory.route ORDER BY sourceairport;
-----
-
-Now, using the cbq shell, we change the profile setting to "phases" and rerun another long query:
-
-.Query 2
-[source,sqlpp]
-----
-\set -profile "phases";
-SELECT * FROM `travel-sample`.inventory.route ORDER BY destinationairport;
-----
-
-Finally, run a query against the `system:completed_requests` keyspace, including `meta().plan` in the projection.
-
-.Query 3
-[source,sqlpp]
-----
-SELECT meta().plan, * from system:completed_requests;
-----
-
-.Result
-[source,json]
-----
-{
-  "requestID": "4a36e1dc-cea0-4ba2-a428-258511d50582",
-  "signature": {
-    "*": "*",
-    "plan": "json"
-  },
-  "results": [
-    // ...
-    { // <1>
-      "completed_requests": {
-        "elapsedTime": "15.641879295s",
-        "errorCount": 0,
-        "node": "127.0.0.1:8091",
-        "phaseCounts": {
-          "fetch": 24024,
-          "primaryScan": 24024,
-          "sort": 24024
-        },
-        "phaseOperators": {
-          "authorize": 1,
-          "fetch": 1,
-          "primaryScan": 1,
-          "sort": 1
-        },
-        "phaseTimes": {
-          "authorize": "51.305µs",
-          "fetch": "3.27276723s",
-          "instantiate": "60.662µs",
-          "parse": "66.701943ms",
-          "plan": "15.12951ms",
-          "primaryScan": "171.439769ms",
-          "run": "15.548781894s",
-          "sort": "153.767638ms"
-        },
-        "remoteAddr": "172.17.0.1:56962",
-        "requestId": "08445bae-66ef-4ccd-8b2d-ea899b453a1b",
-        "requestTime": "2021-04-30T21:14:57.576Z",
-        "resultCount": 24024,
-        "resultSize": 81821919,
-        "scanConsistency": "unbounded",
-        "serviceTime": "15.630714144s",
-        "state": "completed",
-        "statement": "SELECT * FROM `travel-sample`.inventory.route ORDER BY destinationairport;",
-        "useCBO": true,
-        "userAgent": "Go-http-client/1.1 (CBQ/2.0)",
-        "users": "Administrator"
-      }
-    },
-    // ...
-    { // <2>
-      "completed_requests": {
-        "elapsedTime": "15.321128463s",
-        "errorCount": 0,
-        "node": "127.0.0.1:8091",
-        "phaseCounts": {
-          "fetch": 24024,
-          "primaryScan": 24024,
-          "sort": 24024
-        },
-        "phaseOperators": {
-          "authorize": 1,
-          "fetch": 1,
-          "primaryScan": 1,
-          "sort": 1
-        },
-        "phaseTimes": {
-          "authorize": "23.037µs",
-          "fetch": "3.092306635s",
-          "instantiate": "7.313569ms",
-          "parse": "579.368µs",
-          "plan": "2.449143ms",
-          "primaryScan": "153.686873ms",
-          "run": "15.29433203s",
-          "sort": "147.889352ms"
-        },
-        "remoteAddr": "172.17.0.1:56900",
-        "requestId": "5eefc9e5-bdaa-4824-bcd7-47977eb1f08a",
-        "requestTime": "2021-04-30T21:13:58.707Z",
-        "resultCount": 24024,
-        "resultSize": 81821919,
-        "scanConsistency": "unbounded",
-        "serviceTime": "15.30510306s",
-        "state": "completed",
-        "statement": "SELECT * FROM `travel-sample`.inventory.route ORDER BY sourceairport;",
-        "useCBO": true,
-        "userAgent": "Go-http-client/1.1 (CBQ/2.0)",
-        "users": "Administrator"
-      },
-      "plan": {
-        "#operator": "Authorize",
-        "#stats": {
-          "#phaseSwitches": 4,
-          "execTime": "1.725µs",
-          "servTime": "21.312µs"
-        },
-        "privileges": {
-          "List": [
-            {
-              "Priv": 7,
-              "Props": 0,
-              "Target": "default:travel-sample.inventory.route"
-            }
-          ]
-        },
-        "~child": {
-          "#operator": "Sequence",
-          "#stats": {
-            "#phaseSwitches": 2,
-            "execTime": "1.499µs"
-          },
-          "~children": [
-            {
-              "#operator": "PrimaryScan3",
-              "#stats": {
-                "#heartbeatYields": 6,
-                "#itemsOut": 24024,
-                "#phaseSwitches": 96099,
-                "execTime": "84.366121ms",
-                "kernTime": "3.021901421s",
-                "servTime": "69.320752ms"
-              },
-              "bucket": "travel-sample",
-              "index": "def_inventory_route_primary",
-              "index_projection": {
-                "primary_key": true
-              },
-              "keyspace": "route",
-              "namespace": "default",
-              "scope": "inventory",
-              "using": "gsi"
-            },
-            {
-              "#operator": "Fetch",
-              "#stats": {
-                "#heartbeatYields": 7258,
-                "#itemsIn": 24024,
-                "#itemsOut": 24024,
-                "#phaseSwitches": 99104,
-                "execTime": "70.34694ms",
-                "kernTime": "142.630196ms",
-                "servTime": "3.021959695s"
-              },
-              "bucket": "travel-sample",
-              "keyspace": "route",
-              "namespace": "default",
-              "scope": "inventory"
-            },
-            {
-              "#operator": "InitialProject",
-              "#stats": {
-                "#itemsIn": 24024,
-                "#itemsOut": 24024,
-                "#phaseSwitches": 96100,
-                "execTime": "15.331951ms",
-                "kernTime": "3.219612458s"
-              },
-              "result_terms": [
-                {
-                  "expr": "self",
-                  "star": true
-                }
-              ]
-            },
-            {
-              "#operator": "Order",
-              "#stats": {
-                "#itemsIn": 24024,
-                "#itemsOut": 24024,
-                "#phaseSwitches": 72078,
-                "execTime": "147.889352ms",
-                "kernTime": "3.229055752s"
-              },
-              "sort_terms": [
-                {
-                  "expr": "(`route`.`sourceairport`)"
-                }
-              ]
-            },
-            {
-              "#operator": "Stream",
-              "#stats": {
-                "#itemsIn": 24024,
-                "#itemsOut": 24024,
-                "#phaseSwitches": 24025,
-                "execTime": "11.851634134s"
-              }
-            }
-          ]
-        },
-        "~versions": [
-          "7.0.0-N1QL",
-          "7.0.0-4960-enterprise"
-        ]
-      }
-    }
-  ],
-  "status": "success",
-  "metrics": {
-    "elapsedTime": "172.831251ms",
-    "executionTime": "172.586836ms",
-    "resultCount": 40,
-    "resultSize": 65181,
-    "serviceLoad": 12
-  },
-  "profile": { // <3>
-    "phaseTimes": {
-      "authorize": "19.912µs",
-      "fetch": "123.022426ms",
-      "instantiate": "29.424µs",
-      "parse": "6.414711ms",
-      "plan": "3.19076ms",
-      "primaryScan": "55.521683ms",
-      "run": "158.514001ms"
-    },
-    "phaseCounts": {
-      "fetch": 40,
-      "primaryScan": 40
-    },
-    "phaseOperators": {
-      "authorize": 1,
-      "fetch": 1,
-      "primaryScan": 1
-    },
-    "requestTime": "2021-04-30T21:15:18.104Z",
-    "servicingHost": "127.0.0.1:8091"
-  }
-}
-----
-
-This example shows:
-
-<1> The completed request entry for Query 2, showing phases-related statistics only, as that query ran with the `phases` profile setting.
-<2> The completed request entry for Query 1, whose `meta().plan` contains the detailed per-operator statistics collected with the `timings` profile setting.
-<3> The profile attribute with phases-related statistics for Query 3 itself, which is querying the `system:completed_requests` keyspace.
-====
-
-[#sys_my-user-info]
-== Monitor Your User Info
-
-The `system:my_user_info` catalog maintains the profile information for the current user.
-
-To see your current information, use:
-
-[source,sqlpp]
-----
-SELECT * FROM system:my_user_info;
-----
-
-This will result in a list similar to:
-
-[source,json]
-----
-[
-  {
-    "my_user_info": {
-      "domain": "local",
-      "external_groups": [],
-      "groups": [],
-      "id": "jane",
-      "name": "Jane Doe",
-      "password_change_date": "2019-05-07T02:31:53.000Z",
-      "roles": [
-        {
-          "origins": [
-            {
-              "type": "user"
-            }
-          ],
-          "role": "admin"
-        }
-      ]
-    }
-  }
-]
-----
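-
-To list just the roles granted to your account, you can unnest the `roles` array (a sketch):
-
-[source,sqlpp]
-----
-SELECT r.`role`, r.origins FROM system:my_user_info AS u UNNEST u.roles AS r;
-----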
-
-[#sys-user-info]
-== Monitor All User Info
-
-The `system:user_info` catalog maintains a list of all current users and their information.
-
-To see the list of all current users, use:
-
-[source,sqlpp]
-----
-SELECT * FROM system:user_info;
-----
-
-This will result in a list similar to:
-
-[source,json]
-----
-[
-  {
-    "user_info": {
-      "domain": "local",
-      "external_groups": [],
-      "groups": [],
-      "id": "jane",
-      "name": "Jane Doe",
-      "password_change_date": "2019-05-07T02:31:53.000Z",
-      "roles": [
-        {
-          "origins": [
-            {
-              "type": "user"
-            }
-          ],
-          "role": "admin"
-        }
-      ]
-    }
-  },
-  {
-    "user_info": {
-      "domain": "ns_server",
-      "id": "Administrator",
-      "name": "Administrator",
-      "roles": [
-        {
-          "role": "admin"
-        }
-      ]
-    }
-  }
-]
-----
-
-[#sys-nodes]
-== Monitor Nodes
-
-The `system:nodes` catalog shows the datastore topology information.
-This is separate from the Query clustering view, in that Query clustering shows a map of the Query cluster, as provided by the cluster manager, while `system:nodes` shows a view of the nodes and services that make up the actual datastore, which may or may not include Query nodes.
-
-* This distinction matters because Query nodes could be clustered by one entity (e.g. ZooKeeper) and be connected to a clustered datastore (e.g. Couchbase) such that each does not have visibility of the other.
-* Should {sqlpp} be extended to be able to concurrently connect to multiple datastores, each datastore will report its own topology, so that `system:nodes` offers a complete view of all the storage nodes, whatever those may be.
-* The `system:nodes` keyspace provides a way to report services advertised by each node as well as services that are actually running.
-This is datastore dependent.
-* Query clustering is still reported by the `/admin` endpoints.
-
-To see the list of all current node information, use:
-
-[source,sqlpp]
-----
-SELECT * FROM system:nodes;
-----
-
-This will result in a list similar to:
-
-[source,json]
-----
-[
-  {
-    "nodes": {
-      "name": "127.0.0.1:8091",
-      "ports": {
-        "cbas": 8095,
-        "cbasAdmin": 9110,
-        "cbasCc": 9111,
-        "cbasSSL": 18095,
-        "eventingAdminPort": 8096,
-        "eventingSSL": 18096,
-        "fts": 8094,
-        "ftsSSL": 18094,
-        "indexAdmin": 9100,
-        "indexHttp": 9102,
-        "indexHttps": 19102,
-        "indexScan": 9101,
-        "indexStreamCatchup": 9104,
-        "indexStreamInit": 9103,
-        "indexStreamMaint": 9105,
-        "kv": 11210,
-        "kvSSL": 11207,
-        "n1ql": 8093,
-        "n1qlSSL": 18093
-      },
-      "services": [
-        "cbas",
-        "eventing",
-        "fts",
-        "index",
-        "kv",
-        "n1ql"
-      ]
-    }
-  }
-]
-----
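-
-Since `services` is an array, you can filter nodes by the services they run.
-For example, to find the nodes running the Query service (a sketch):
-
-[source,sqlpp]
-----
-SELECT name FROM system:nodes WHERE "n1ql" IN services;
-----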
-
-[#sys-app-roles]
-== Monitor Applicable Roles
-
-The `system:applicable_roles` catalog maintains a list of all applicable roles and grantees for each bucket.
-
-To see the list of all current applicable role information, use:
-
-[source,sqlpp]
-----
-SELECT * FROM system:applicable_roles;
-----
-
-This will result in a list similar to:
-
-[source,json]
-----
-[
-  {
-    "applicable_roles": {
-      "grantee": "anil",
-      "role": "replication_admin"
-    }
-  },
-  {
-    "applicable_roles": {
-      "bucket_name": "travel-sample",
-      "grantee": "anil",
-      "role": "select"
-    }
-  },
-  {
-    "applicable_roles": {
-      "bucket_name": "*",
-      "grantee": "anil",
-      "role": "select"
-    }
-  }
-]
-----
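-
-To audit the roles granted to a single user, filter on the `grantee` field.
-For example, using the user name from the output above (a sketch):
-
-[source,sqlpp]
-----
-SELECT `role`, bucket_name FROM system:applicable_roles WHERE grantee = "anil";
-----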
-
-For more examples, take a look at the blog: https://blog.couchbase.com/optimize-n1ql-performance-using-request-profiling/[Optimize {sqlpp} performance using request profiling^].
-
-[#sys-dictionary]
-== Monitor Statistics
-
-The `system:dictionary` catalog maintains a list of the on-disk optimizer statistics stored in the `_query` collection within the `_system` scope.
-
-If you have multiple query nodes, the data retrieved from this catalog will be the same, regardless of the node on which you run the query.
-
-To see the list of on-disk optimizer statistics, use:
-
-[source,sqlpp]
-----
-SELECT * FROM system:dictionary;
-----
-
-This will result in a list similar to:
-
-[source,json]
-----
-[
-  {
-    "dictionary": {
-      "avgDocKeySize": 12,
-      "avgDocSize": 278,
-      "bucket": "travel-sample",
-      "distributionKeys": [
-        "airportname",
-        "faa",
-        "city"
-      ],
-      "docCount": 1968,
-      "indexes": [
-        {
-          "indexId": "bc3048e87bf84828",
-          "indexName": "def_inventory_airport_primary",
-          "indexStats": [
-            {
-              "avgItemSize": 24,
-              "avgPageSize": 11760,
-              "numItems": 1968,
-              "numPages": 4,
-              "resRatio": 1
-            }
-          ]
-        },
-        // ...
-      ],
-      "keyspace": "airport",
-      "namespace": "default",
-      "scope": "inventory"
-    }
-  },
-  // ...
-]
-----
-
-This catalog contains an array of dictionaries, one for each keyspace for which optimizer statistics are available.
-Each dictionary gives the following information:
-
-[options="header", cols="~a,~a,~a"]
-|===
-|Name|Description|Schema
-
-|**avgDocKeySize** +
-__required__
-|Average doc key size.
-|Integer
-
-|**avgDocSize** +
-__required__
-|Average doc size.
-|Integer
-
-|**bucket** +
-__required__
-|The bucket for which statistics are available.
-|String
-
-|**keyspace** +
-__required__
-|The keyspace for which statistics are available.
-|String
-
-|**namespace** +
-__required__
-|The namespace for which statistics are available.
-|String
-
-|**scope** +
-__required__
-|The scope for which statistics are available.
-|String
-
-|**distributionKeys** +
-__required__
-|Distribution keys for which histograms are available.
-|String array
-
-|**docCount** +
-__required__
-|Document count.
-|Integer
-
-|**indexes** +
-__required__
-|An array of indexes in this keyspace for which statistics are available.
-|<<indexes>> array
-
-|**node** +
-__required__
-|The query node where this dictionary cache is resident.
-|String
-|===
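-
-You can filter this catalog on any of the attributes above.
-For example, to retrieve the statistics for a single keyspace (a sketch; the backticks avoid clashes with reserved words):
-
-[source,sqlpp]
-----
-SELECT * FROM system:dictionary
-WHERE `bucket` = "travel-sample" AND `scope` = "inventory" AND `keyspace` = "airport";
-----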
-
-[[indexes]]
-**Indexes**
-
-[options="header", cols="~a,~a,~a"]
-|===
-|Name|Description|Schema
-
-|**indexId** +
-__required__
-|The index ID.
-|String
-
-|**indexName** +
-__required__
-|The index name.
-|String
-
-|**indexStats** +
-__required__
-|An array of statistics for each index, with one element for each index partition.
-|<<indexStats>> array
-|===
-
-[[indexStats]]
-**Index Statistics**
-
-[options="header", cols="~a,~a,~a"]
-|===
-|Name|Description|Schema
-
-|**avgItemSize** +
-__required__
-|Average item size.
-|Integer
-
-|**avgPageSize** +
-__required__
-|Average page size.
-|Integer
-
-|**numItems** +
-__required__
-|Number of items.
-|Integer
-
-|**numPages** +
-__required__
-|Number of pages.
-|Integer
-
-|**resRatio** +
-__required__
-|Resident ratio.
-|Integer
-|===
-
-For further details, refer to xref:n1ql:n1ql-language-reference/updatestatistics.adoc[UPDATE STATISTICS].
-
-[#sys-dictionary-cache]
-== Monitor Cached Statistics
-
-The `system:dictionary_cache` catalog maintains a list of the in-memory cached subset of the optimizer statistics.
-
-If you have multiple query nodes, the data retrieved from this catalog shows cached optimizer statistics from all nodes.
-Individual nodes may have a different subset of cached information.
-
-To see the list of in-memory optimizer statistics, use:
-
-[source,sqlpp]
-----
-SELECT * FROM system:dictionary_cache;
-----
-
-This will result in a list similar to:
-
-[source,json]
-----
-[
-  {
-    "dictionary_cache": {
-      "avgDocKeySize": 12,
-      "avgDocSize": 278,
-      "bucket": "travel-sample",
-      "distributionKeys": [
-        "airportname",
-        "faa",
-        "city"
-      ],
-      "docCount": 1968,
-      "indexes": [
-        {
-          "indexId": "bc3048e87bf84828",
-          "indexName": "def_inventory_airport_primary",
-          "indexStats": [
-            {
-              "avgItemSize": 24,
-              "avgPageSize": 11760,
-              "numItems": 1968,
-              "numPages": 4,
-              "resRatio": 1
-            }
-          ]
-        },
-        // ...
-      ],
-      "keyspace": "airport",
-      "namespace": "default",
-      "node": "172.23.0.3:8091",
-      "scope": "inventory"
-    }
-  },
-  // ...
-]
-----
-
-This catalog contains an array of dictionary caches, one for each keyspace for which optimizer statistics are available.
-Each dictionary cache gives the same information as the <<sys-dictionary,`system:dictionary`>> catalog.
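-
-Because each entry includes a `node` field, you can check which query nodes currently have statistics cached for a given keyspace.
-For example (a sketch, using the keyspace from the output above):
-
-[source,sqlpp]
-----
-SELECT node FROM system:dictionary_cache
-WHERE `bucket` = "travel-sample" AND `keyspace` = "airport";
-----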
-
-For further details, refer to xref:n1ql:n1ql-language-reference/updatestatistics.adoc[UPDATE STATISTICS].
-
-[#sys-functions]
-== Monitor Functions
-
-The `system:functions` catalog maintains a list of all user-defined functions across all nodes.
-To see the list of all user-defined functions, use:
-
-[source,sqlpp]
-----
-SELECT * FROM system:functions;
-----
-
-This will result in a list similar to:
-
-[source,json]
-----
-[
-  {
-    "functions": {
-      "definition": {
-        "#language": "inline",
-        "expression": "(((`fahrenheit` - 32) * 5) / 9)",
-        "parameters": [
-          "fahrenheit"
-        ],
-        "text": "((fahrenheit - 32) * 5/9)"
-      },
-      "identity": {
-        "bucket": "travel-sample",
-        "name": "celsius",
-        "namespace": "default",
-        "scope": "inventory",
-        "type": "scope"
-      }
-    }
-  },
-  {
-    "functions": {
-      "definition": {
-        "#language": "javascript",
-        "library": "geohash-js",
-        "name": "geohash-js",
-        "object": "calculateAdjacent",
-        "parameters": [
-          "src",
-          "dir"
-        ]
-      },
-      "identity": {
-        "name": "adjacent",
-        "namespace": "default",
-        "type": "global"
-      }
-    }
-  },
-  // ...
-]
-----
-
-This catalog contains the following attributes:
-
-[options="header", cols="~a,~a,~a"]
-|===
-|Name|Description|Schema
-
-|**definition** +
-__required__
-|The definition of the function.
-|<<definition,Definition>> object
-
-|**identity** +
-__required__
-|The identity of the function.
-|<<identity,Identity>> object
-|===
-
-[[definition]]
-**Definition**
-
-[options="header", cols="~a,~a,~a"]
-|===
-|Name|Description|Schema
-
-|**#language** +
-__required__
-|The language of the function.
-
-*Example*: `inline`
-|String
-
-|**parameters** +
-__required__
-|The parameters required by the function.
-|String array
-
-|**expression** +
-__optional__
-|For inline functions only: the expression defining the function.
-|String
-
-|**text** +
-__optional__
-|For inline functions: the verbatim text of the function.
-
-'''
-
-For {sqlpp} managed user-defined functions: the external code defining the function.
-|String
-
-|**library** +
-__optional__
-|For external functions only: the library containing the function.
-|String
-
-|**name** +
-__optional__
-|For external functions only: the relative name of the library.
-|String
-
-|**object** +
-__optional__
-|For external functions only: the object defining the function.
-|String
-|===
-
-[[identity]]
-**Identity**
-
-[options="header", cols="~a,~a,~a"]
-|===
-|Name|Description|Schema
-
-|**name** +
-__required__
-|The name of the function.
-|String
-
-|**namespace** +
-__required__
-|The namespace of the function.
-
-*Example*: `default`
-|String
-
-|**type** +
-__required__
-|The type of the function.
-
-*Example*: `global`
-|String
-
-|**bucket** +
-__optional__
-|For scoped functions only: the bucket containing the function.
-|String
-
-|**scope** +
-__optional__
-|For scoped functions only: the scope containing the function.
-|String
-|===
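-
-For example, to list only the inline functions -- using the `definition` and `identity` attributes described above -- you can write a query such as:
-
-[source,sqlpp]
-----
-SELECT f.identity.name, f.definition.parameters
-FROM system:functions AS f
-WHERE f.definition.`#language` = "inline";
-----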
-
-[#sys-functions-cache]
-== Monitor Cached Functions
-
-The `system:functions_cache` catalog maintains a list of recently-used user-defined functions across all nodes.
-The catalog also lists user-defined functions that have been called recently, but do not exist.
-To see the list of recently-used user-defined functions, use:
-
-[source,sqlpp]
-----
-SELECT * FROM system:functions_cache;
-----
-
-This will result in a list similar to:
-
-[source,json]
-----
-[
-  {
-    "functions_cache": {
-      "#language": "inline",
-      "avgServiceTime": "3.066847ms",
-      "expression": "(((`fahrenheit` - 32) * 5) / 9)",
-      "lastUse": "2022-03-09 00:17:59.60659793 +0000 UTC m=+35951.429537902",
-      "maxServiceTime": "3.066847ms",
-      "minServiceTime": "0s",
-      "name": "celsius",
-      "namespace": "default",
-      "node": "127.0.0.1:8091",
-      "parameters": [
-        "fahrenheit"
-      ],
-      "scope": "inventory",
-      "text": "((fahrenheit - 32) * 5/9)",
-      "type": "scope",
-      "uses": 1
-    }
-  },
-  {
-    "functions_cache": {
-      "#language": "javascript",
-      "avgServiceTime": "56.892636ms",
-      "lastUse": "2022-03-09 00:15:46.289934029 +0000 UTC m=+35818.007560703",
-      "library": "geohash-js",
-      "maxServiceTime": "146.025426ms",
-      "minServiceTime": "0s",
-      "name": "geohash-js",
-      "namespace": "default",
-      "node": "127.0.0.1:8091",
-      "object": "calculateAdjacent",
-      "parameters": [
-        "src",
-        "dir"
-      ],
-      "type": "global",
-      "uses": 4
-    }
-  },
-  {
-    "functions_cache": {
-      "avgServiceTime": "3.057421ms",
-      "lastUse": "2022-03-09 00:17:25.396840275 +0000 UTC m=+35917.199008929",
-      "maxServiceTime": "3.057421ms",
-      "minServiceTime": "0s",
-      "name": "notFound",
-      "namespace": "default",
-      "node": "127.0.0.1:8091",
-      "type": "global",
-      "undefined_function": true,
-      "uses": 1
-    }
-  }
-]
-----
-
-This catalog contains the following attributes:
-
-[options="header", cols="~a,~a,~a"]
-|===
-|Name|Description|Schema
-
-|**#language** +
-__required__
-|The language of the function.
-
-*Example*: `inline`
-|String
-
-|**name** +
-__required__
-|The name of the function.
-|String
-
-|**namespace** +
-__required__
-|The namespace of the function.
-
-*Example*: `default`
-|String
-
-|**parameters** +
-__required__
-|The parameters required by the function.
-|String array
-
-|**type** +
-__required__
-|The type of the function.
-
-*Example*: `global`
-|String
-
-|**scope** +
-__optional__
-|For scoped functions only: the scope containing the function.
-|String
-
-|**expression** +
-__optional__
-|For inline functions only: the expression defining the function.
-|String
-
-|**text** +
-__optional__
-|For inline functions: the verbatim text of the function.
-
-'''
-
-For {sqlpp} managed user-defined functions: the external code defining the function.
-|String
-
-|**library** +
-__optional__
-|For external functions only: the library containing the function.
-|String
-
-|**object** +
-__optional__
-|For external functions only: the object defining the function.
-|String
-
-|**avgServiceTime** +
-__required__
-|The mean service time for the function.
-|String
-
-|**lastUse** +
-__required__
-|The date and time when the function was last used.
-|String
-
-|**maxServiceTime** +
-__required__
-|The maximum service time for the function.
-|String
-
-|**minServiceTime** +
-__required__
-|The minimum service time for the function.
-|String
-
-|**node** +
-__required__
-|The query node where the function is cached.
-|String
-
-|**undefined_function** +
-__required__
-|Whether the function exists or is undefined.
-|Boolean
-
-|**uses** +
-__required__
-|The number of uses of the function.
-|Number
-|===
-
-Each query node keeps its own cache of recently-used user-defined functions, so you may see the same function listed for multiple nodes.
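-
-Because each node caches functions independently, you can aggregate the per-node entries to see the total number of calls for each function -- a minimal sketch using the attributes listed above:
-
-[source,sqlpp]
-----
-SELECT fc.name, SUM(fc.uses) AS totalUses
-FROM system:functions_cache AS fc
-GROUP BY fc.name;
-----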
-
-[#sys-tasks-cache]
-== Monitor Cached Tasks
-
-The `system:tasks_cache` catalog maintains a list of recently-used scheduled tasks, such as index advisor sessions.
-To see the list of recently-used scheduled tasks, use:
-
-[source,sqlpp]
-----
-SELECT * FROM system:tasks_cache;
-----
-
-This will result in a list similar to:
-
-[source,json]
-----
-[
-  {
-    "tasks_cache": {
-      "class": "advisor",
-      "delay": "1h0m0s",
-      "id": "bcd9f8e4-b324-504c-a98b-ace90dba869f",
-      "name": "aa7f688a-bf29-438f-888f-eeaead87ca40",
-      "node": "10.143.192.101:8091",
-      "state": "scheduled",
-      "subClass": "analyze",
-      "submitTime": "2019-09-17 05:18:12.903122381 -0700 PDT m=+8460.550715992"
-    }
-  },
-  {
-    "tasks_cache": {
-      "class": "advisor",
-      "delay": "5m0s",
-      "id": "254abec5-5782-543e-9ee0-d07da146b94e",
-      "name": "ca2cfe56-01fa-4563-8eb0-a753af76d865",
-      "node": "10.143.192.101:8091",
-      "results": [
-        // ...
-      ],
-      "startTime": "2019-09-17 05:03:31.821597725 -0700 PDT m=+7579.469191487",
-      "state": "completed",
-      "stopTime": "2019-09-17 05:03:31.963133954 -0700 PDT m=+7579.610727539",
-      "subClass": "analyze",
-      "submitTime": "2019-09-17 04:58:31.821230131 -0700 PDT m=+7279.468823737"
-    }
-  }
-]
-----
-
-This catalog contains the following attributes:
-
-[options="header", cols="~a,~a,~a"]
-|===
-|Name|Description|Schema
-
-|**class** +
-__required__
-|The class of the task.
-
-*Example*: `advisor`
-|string
-
-|**delay** +
-__required__
-|The scheduled duration of the task.
-|string
-
-|**id** +
-__required__
-|The internal ID of the task.
-|string
-
-|**name** +
-__required__
-|The name of the task.
-|string
-
-|**node** +
-__required__
-|The node where the task was started.
-|string
-
-|**state** +
-__required__
-|The state of the task.
-
-*Values*: `scheduled`, `cancelled`, `completed`
-|string
-
-|**subClass** +
-__required__
-|The subclass of the task.
-
-*Example*: `analyze`
-|string
-
-|**submitTime** +
-__required__
-|The date and time when the task was submitted.
-|string
-
-|**results** +
-__optional__
-|For tasks that are not in the scheduled state: the results of the task.
-|Any array
-
-|**startTime** +
-__optional__
-|For tasks that are not in the scheduled state: the date and time when the task started.
-|string (date-time)
-
-|**stopTime** +
-__optional__
-|For tasks that are not in the scheduled state: the date and time when the task stopped.
-|string (date-time)
-|===
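-
-For example, to review only the tasks that have finished, you can filter on the `state` attribute:
-
-[source,sqlpp]
-----
-SELECT tc.name, tc.subClass, tc.startTime, tc.stopTime
-FROM system:tasks_cache AS tc
-WHERE tc.state = "completed";
-----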
-
-Refer to xref:n1ql:n1ql-language-reference/advisor.adoc[ADVISOR Function] for more information on index advisor sessions.
-
-[#sys-transactions]
-== Monitor Transactions
-
-The `system:transactions` catalog maintains a list of active Couchbase transactions.
-To see the list of active transactions, use:
-
-[source,sqlpp]
-----
-SELECT * FROM system:transactions;
-----
-
-This will result in a list similar to:
-
-[source,json]
-----
-[
-  {
-    "transactions": {
-      "durabilityLevel": "majority",
-      "durabilityTimeout": "2.5s",
-      "expiryTime": "2021-04-21T12:53:48.598+01:00",
-      "id": "85aea637-2288-434b-b7c5-413ad8e7c175",
-      "isolationLevel": "READ COMMITED",
-      "lastUse": "2021-04-21T12:51:48.598+01:00",
-      "node": "127.0.0.1:8091",
-      "numAtrs": 1024,
-      "scanConsistency": "unbounded",
-      "status": 0,
-      "timeout": "2m0s",
-      "usedMemory": 960,
-      "uses": 1
-    }
-  // ...
-  }
-]
-----
-
-This catalog contains the following attributes:
-
-[options="header", cols="~a,~a,~a"]
-|===
-|Name|Description|Schema
-
-|**durabilityLevel** +
-__required__
-|Durability level for all mutations within a transaction.
-|string
-
-|**durabilityTimeout** +
-__required__
-|Durability timeout per mutation within the transaction.
-|string (duration)
-
-|**expiryTime** +
-__required__
-|The date and time at which the transaction expires.
-|string (date-time)
-
-|**id** +
-__required__
-|The transaction ID.
-|string
-
-|**isolationLevel** +
-__required__
-|The isolation level of the transaction.
-|string
-
-|**lastUse** +
-__required__
-|The date and time when the transaction was last used.
-|string (date-time)
-
-|**node** +
-__required__
-|The node where the transaction was started.
-|string
-
-|**numAtrs** +
-__required__
-|The total number of active transaction records.
-|integer
-
-|**scanConsistency** +
-__required__
-|The transactional scan consistency.
-|string
-
-|**status** +
-__required__
-|The status of the transaction.
-|integer
-
-|**timeout** +
-__required__
-|The transaction timeout duration.
-|string (duration)
-
-|**usedMemory** +
-__required__
-|The amount of memory used by the transaction.
-|integer
-
-|**uses** +
-__required__
-|The number of times the transaction has been used.
-|integer
-|===
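-
-For example, to check which transactions are approaching their expiry, you can select just the timing attributes:
-
-[source,sqlpp]
-----
-SELECT t.id, t.lastUse, t.expiryTime, t.timeout
-FROM system:transactions AS t;
-----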
-
-Refer to xref:n1ql:n1ql-language-reference/transactions.adoc[{sqlpp} Support for Couchbase Transactions] for more information.
-
-== Related Links
-
-* Refer to xref:n1ql:n1ql-intro/sysinfo.adoc[Getting System Information] for more information on the system namespace.
diff --git a/modules/metrics-reference/attachments/fts_metrics_metadata.json b/modules/metrics-reference/attachments/fts_metrics_metadata.json
index ec6c3fd847..2914aa24ee 100644
--- a/modules/metrics-reference/attachments/fts_metrics_metadata.json
+++ b/modules/metrics-reference/attachments/fts_metrics_metadata.json
@@ -504,5 +504,10 @@
       "index"
     ],
     "type": "counter"
+  },
+  "fts_total_vectors": {
+    "added": "7.6.1",
+    "help": "The total number of vectors indexed",
+    "type": "gauge"
   }
 }
diff --git a/modules/n1ql/pages/n1ql-rest-api/intro.adoc b/modules/n1ql/pages/n1ql-rest-api/intro.adoc
index 0dde9ab412..b1b3e8f128 100644
--- a/modules/n1ql/pages/n1ql-rest-api/intro.adoc
+++ b/modules/n1ql/pages/n1ql-rest-api/intro.adoc
@@ -44,4 +44,4 @@ include::rest-api:partial$rest-query-service-table.adoc[tags=query-functions]
 
 == See Also
 
-For an explanation of how cluster-level settings, node-level settings, and request-level parameters interact, see xref:settings:query-settings.adoc[].
+For an explanation of how cluster-level settings, node-level settings, and request-level parameters interact, see xref:n1ql:n1ql-manage/query-settings.adoc[].
diff --git a/modules/release-notes/pages/relnotes.adoc b/modules/release-notes/pages/relnotes.adoc
index 2731519ea9..a089ffac5f 100644
--- a/modules/release-notes/pages/relnotes.adoc
+++ b/modules/release-notes/pages/relnotes.adoc
@@ -1,6 +1,7 @@
 = Release Notes for Couchbase Server 7.6
 :page-aliases: analytics:releasenote
 :description: Couchbase Server 7.6.0 introduces multiple new features and fixes, as well as some deprecations and removals.
+:page-toclevels: 2
 
 include::partial$docs-server-7.6.1-release-note.adoc[]
 
diff --git a/modules/release-notes/partials/docs-server-7.6-release-note.adoc b/modules/release-notes/partials/docs-server-7.6-release-note.adoc
index 3848a06390..0b77f5c04a 100644
--- a/modules/release-notes/partials/docs-server-7.6-release-note.adoc
+++ b/modules/release-notes/partials/docs-server-7.6-release-note.adoc
@@ -189,10 +189,21 @@ The user will now be able to run queries via {sqlpp} without having to run the k
 |===
 
 [#known-issues-760]
-== Known Issues
+=== Known Issues
 
 This release contains the following known issues:
 
+==== .NET SDK Compatibility
+[#table-known-issues-760-dotnet-sdk, cols="10,40,40"]
+|===
+|Issue | Description | Workaround
+
+| https://issues.couchbase.com/browse/NCBC-3724[NCBC-3724]
+| Versions of the .NET SDK earlier than 3.5.1 have compatibility issues with Couchbase Server 7.6.
+| Use version 3.5.1 or later of the .NET SDK with Couchbase Server 7.6.
+
+|===
+
 ==== User Interface
 [#table-known-issues-760-user-interface, cols="10,40,40"]
 |===
@@ -203,7 +214,7 @@ This release contains the following known issues:
 | NA
 |===
 
-=== Failover
+==== Failover
 [#table-known-issues-760-failover, cols="10,40,40"]
 |===
 |Issue | Description | Workaround
@@ -214,7 +225,7 @@ This release contains the following known issues:
 For more information on the auto-failover settings, see the documentation.
 |===
 
-=== Tools
+==== Tools
 [#table-known-issues-760-tools, cols="10,40,40"]
 |===
 |Issue | Description | Workaround
@@ -224,7 +235,7 @@ For more information on the auto-failover settings, see the documentation.
 | Merge backups manually using the UI or using the API.
 |===
 
-=== Storage Engine
+==== Storage Engine
 [#table-known-issues-760-storage-engine, cols="10,40,40"]
 |===
 |Issue | Description | Workaround
@@ -232,9 +243,15 @@ For more information on the auto-failover settings, see the documentation.
 | https://issues.couchbase.com/browse/MB-61154[MB-61154]
 | In situations where bucket data exceeds 4 TB and Magma is being used as the storage engine, it is possible for rebalance to hang and fail to run to completion. 
 | NA
-|====
-
-
-
+|===
 
+==== Search Service
+[#table-known-issues-760-search-service, cols="10,40,40"]
+|===
+|Issue | Description | Workaround
 
+| https://issues.couchbase.com/browse/MB-60719[MB-60719]
+| Operations from older SDKs might fail when you access the Search Service with the `disableScoring` option set to false.
+This is a breaking change due to a change in the response payload.
+| Set the `disableScoring` option in SDKs to true.
+|===
diff --git a/modules/release-notes/partials/docs-server-7.6.1-release-note.adoc b/modules/release-notes/partials/docs-server-7.6.1-release-note.adoc
index e2c5e510f9..f61d43f7b7 100644
--- a/modules/release-notes/partials/docs-server-7.6.1-release-note.adoc
+++ b/modules/release-notes/partials/docs-server-7.6.1-release-note.adoc
@@ -31,3 +31,18 @@ This release contains the following fixes:
 | NA
 |===
 
+[#known-issues-761]
+=== Known Issues
+
+This release contains the following known issue:
+
+==== Search Service
+[#table-known-issues-761-search-service, cols="10,40,40"]
+|===
+|Issue | Description | Workaround
+
+| https://issues.couchbase.com/browse/MB-60719[MB-60719]
+| Operations from older SDKs might fail when you access the Search Service with the `disableScoring` option set to false.
+This is a breaking change due to a change in the response payload.
+| Set the `disableScoring` option in SDKs to true.
+|===
diff --git a/modules/rest-api/examples/beer-sample-task-status.json b/modules/rest-api/examples/beer-sample-task-status.json
new file mode 100644
index 0000000000..e796290e0c
--- /dev/null
+++ b/modules/rest-api/examples/beer-sample-task-status.json
@@ -0,0 +1,9 @@
+[
+    {
+      "task_id": "439b29de-0018-46ba-83c3-d3f58be68b12",
+      "status": "running",
+      "type": "loadingSampleBucket",
+      "bucket": "beer-sample",
+      "bucket_uuid": "not_present"
+    }
+]
diff --git a/modules/rest-api/examples/install-sample-bucket.sh b/modules/rest-api/examples/install-sample-bucket.sh
new file mode 100644
index 0000000000..0a4ed4a3a2
--- /dev/null
+++ b/modules/rest-api/examples/install-sample-bucket.sh
@@ -0,0 +1,3 @@
+curl -X POST -u Administrator:password \
+     http://localhost:8091/sampleBuckets/install \
+     -d '["travel-sample", "beer-sample"]' | jq .
\ No newline at end of file
diff --git a/modules/rest-api/examples/sample-bucket-install-response.json b/modules/rest-api/examples/sample-bucket-install-response.json
new file mode 100644
index 0000000000..572ab0a7be
--- /dev/null
+++ b/modules/rest-api/examples/sample-bucket-install-response.json
@@ -0,0 +1,14 @@
+{
+  "tasks": [
+    {
+      "taskId": "439b29de-0018-46ba-83c3-d3f58be68b12",
+      "sample": "beer-sample",
+      "bucket": "beer-sample"
+    },
+    {
+      "taskId": "ed6cd88e-d704-4f91-8dd3-543e03669024",
+      "sample": "travel-sample",
+      "bucket": "travel-sample"
+    }
+  ]
+}
\ No newline at end of file
diff --git a/modules/rest-api/examples/sample-bucket-tasks.json b/modules/rest-api/examples/sample-bucket-tasks.json
new file mode 100644
index 0000000000..b408d34fdf
--- /dev/null
+++ b/modules/rest-api/examples/sample-bucket-tasks.json
@@ -0,0 +1,18 @@
+[
+    {
+      "statusId": "e36859e44fb7c226c180b4610313f074",
+      "type": "rebalance",
+      "subtype": "rebalance",
+      "status": "notRunning",
+      "statusIsStale": false,
+      "masterRequestTimedOut": false,
+      "lastReportURI": "/logs/rebalanceReport?reportID=af5b2ac96af031218bae6e3411b007b5"
+    },
+    {
+      "task_id": "439b29de-0018-46ba-83c3-d3f58be68b12",
+      "status": "running",
+      "type": "loadingSampleBucket",
+      "bucket": "beer-sample",
+      "bucket_uuid": "not_present"
+    }
+]
diff --git a/modules/rest-api/examples/sample_bucket_load_status.sh b/modules/rest-api/examples/sample_bucket_load_status.sh
new file mode 100644
index 0000000000..e178b66f02
--- /dev/null
+++ b/modules/rest-api/examples/sample_bucket_load_status.sh
@@ -0,0 +1,22 @@
+#!/bin/bash
+
+echo -e "Executing command:\ncurl -s -X POST -u Administrator:password http://node1:8091/sampleBuckets/install -d '[\"travel-sample\", \"beer-sample\"]'"
+
+taskIds=$(curl -s -X POST -u Administrator:password http://node1:8091/sampleBuckets/install -d '["travel-sample", "beer-sample"]' \
+        | jq . | tee install_output.json | jq '.tasks[] | .taskId' | tr -d '"' )
+
+cat install_output.json
+
+echo -e "\ntaskIds: $taskIds\n\n"
+
+sleep 2
+
+echo -e "\n\nFull task list:"
+
+curl -s -u Administrator:password -X GET http://localhost:8091/pools/default/tasks | jq '.'
+
+for id in ${taskIds}; do
+    echo -e "\ntaskId: $id"
+    echo "Running command: curl -s -u Administrator:password  -G http://localhost:8091/pools/default/tasks -d \"taskId=$id\" | jq '.' "
+    curl -s -u Administrator:password  -G http://localhost:8091/pools/default/tasks -d "taskId=$id" | jq '.' 
+done
diff --git a/modules/rest-api/pages/backup-archive-a-repository.adoc b/modules/rest-api/pages/backup-archive-a-repository.adoc
index 4ddb2acc13..09fcf92fe3 100644
--- a/modules/rest-api/pages/backup-archive-a-repository.adoc
+++ b/modules/rest-api/pages/backup-archive-a-repository.adoc
@@ -16,7 +16,7 @@ POST /repository/active//archive
 
 Archives the specified repository.
 This means that no further scheduled or manually triggered tasks can be run on the repository; with the exception of those that _retrieve information_, _restore data_,  and _examine data_.
-(See xref:rest-api:backup-get-repository-info.adoc[Get Information on Repositories], xref:rest-api:backup-restore-data.adoc[Restore Data], and xref:rest-api:backup-examine-data.adoc[Examine Backed-Up Data], respectively.)
+(See xref:rest-api:backup-get-repository-info.adoc[Get Backup Repository Information], xref:rest-api:backup-restore-data.adoc[Restore Data], and xref:rest-api:backup-examine-data.adoc[Examine Backed-Up Data], respectively.)
 
 Note that a repository that has been archived _cannot_ be returned to active state.
 
@@ -74,6 +74,6 @@ Successful execution returns `200 OK`.
 An overview of the Backup Service is provided in xref:learn:services-and-indexes/services/backup-service.adoc[Backup Service].
 A step-by-step guide to using Couchbase Web Console to configure and use the Backup Service is provided in xref:manage:manage-backup-and-restore/manage-backup-and-restore.adoc[Manage Backup and Restore].
 
-Information on getting information from an archived repository is provided in xref:rest-api:backup-get-repository-info.adoc[Get Information on Repositories].
+Information on getting information from an archived repository is provided in xref:rest-api:backup-get-repository-info.adoc[Get Backup Repository Information].
 Information on restoring data from an archived repository is provided in xref:rest-api:backup-restore-data.adoc[Restore Data].
 Information on examining data within an archived repository is provided in xref:rest-api:backup-examine-data.adoc[Examine Backed-Up Data].
diff --git a/modules/rest-api/pages/backup-get-cluster-info.adoc b/modules/rest-api/pages/backup-get-cluster-info.adoc
index 193d0806ad..3f9f4804e1 100644
--- a/modules/rest-api/pages/backup-get-cluster-info.adoc
+++ b/modules/rest-api/pages/backup-get-cluster-info.adoc
@@ -8,7 +8,7 @@
 == HTTP Methods and URIs
 
 ----
-GET /cluster/self
+GET /api/v1/cluster/self
 ----
 
 [#description]
@@ -20,12 +20,15 @@ Returns a JSON document that contains subdocuments for active, imported, and arc
 == Curl Syntax
 
 ----
-curl -X GET http://:8097/cluster/self
-  -u :
+curl -X GET http://$BACKUP_SERVICE_NODE:$BACKUP_SERVICE_PORT/api/v1/cluster/self
+  -u $USERNAME:$PASSWORD
 ----
 
 Only the host cluster (`self`) can be queried.
-The `username` and `password` must be those of a user with the `Full Admin` role.
+
+== Required Permissions
+
+Full Admin, Backup Admin, or Read-Only Admin role.
 
 [#responses]
 == Responses
@@ -51,13 +54,16 @@ An internal error that prevents return of the repository-information returns `50
 
 The following call requests the list of repositories currently defined on the cluster:
 
+[source, curl]
 ----
-curl -v -X GET http://127.0.0.1:8091/_p/backup/api/v1/cluster/self \
+curl -v -X GET http://127.0.0.1:8097/api/v1/cluster/self \
 -u Administrator:password
 ----
 
 If successful, the call returns `200 OK`, and an object whose initial part may appear as follows:
 
+
+[source, json]
 ----
 {
   "name": "self",
@@ -122,6 +128,10 @@ If successful, the call returns `200 OK`, and an object whose initial part may a
               .
               .
               .
+      }
+    }
+  }
+}
 ----
 
 The cluster is thus shown to contain a single imported repository, no archived repositories, and a number of active repositories, two of which can be identified in the fragment shown.
@@ -129,6 +139,6 @@ The cluster is thus shown to contain a single imported repository, no archived r
 [#see-also]
 == See Also
 
-An overview of the Backup Service is provided in xref:learn:services-and-indexes/services/backup-service.adoc[Backup Service].
-A step-by-step guide to using Couchbase Web Console to configure and use the Backup Service is provided in xref:manage:manage-backup-and-restore/manage-backup-and-restore.adoc[Manage Backup and Restore].
-Information on using the Backup Service REST API to create a repository is provided in xref:rest-api:backup-create-repository.adoc[Create a Repository].
+* For an overview of the Backup Service, see xref:learn:services-and-indexes/services/backup-service.adoc[Backup Service].
+* For a step-by-step guide to using Couchbase Server Web Console to configure and use the Backup Service, see xref:manage:manage-backup-and-restore/manage-backup-and-restore.adoc[Manage Backup and Restore].
+* For information about using the Backup Service REST API to create a repository, see xref:rest-api:backup-create-repository.adoc[Create a Repository].
diff --git a/modules/rest-api/pages/backup-get-plan-info.adoc b/modules/rest-api/pages/backup-get-plan-info.adoc
index 3023fa8909..c7e07d775d 100644
--- a/modules/rest-api/pages/backup-get-plan-info.adoc
+++ b/modules/rest-api/pages/backup-get-plan-info.adoc
@@ -1,5 +1,5 @@
-= Get Information on Plans
-:description: The Backup Service REST API allows information on plans to be retrieved.
+= Get Backup Plan Information
+:description: The Backup Service REST API lets you get information about backup plans.
 
 [abstract]
 {description}
@@ -7,47 +7,80 @@
 [#http-methods-and-uris]
 == HTTP Methods and URIs
 
+Get a list of all defined backup plans:
+
 ----
 GET /plan
+----
+
+Get detailed information about a specific backup plan:
 
-GET /plan/
 ----
+GET /plan/{PLAN_NAME}
+----
+
+.Path Parameters
+[cols="2,3,2"]
+|===
+|Name | Description | Schema
 
-[#description]
-== Description
+| `PLAN_NAME`
+| The name of a backup plan. 
+| String
 
-The `GET /plan` http method and URI return an array of plans, currently defined for the cluster.
-The `GET /plan/` http method and URI return information on a single, specified plan.
+|===
 
 [#curl-syntax]
 == Curl Syntax
 
 ----
-curl -X GET http://:8097/plan
-  -u :
+curl -X GET http://$BACKUP_SERVICE_NODE:$BACKUP_SERVICE_PORT/plan
+  -u $USERNAME:$PASSWORD
 
-curl -X GET http://:8097/plan/
-  -u :
+curl -X GET http://$BACKUP_SERVICE_NODE:$BACKUP_SERVICE_PORT/plan/$PLAN_NAME
+  -u $USERNAME:$PASSWORD
 ----
 
-The `` must be the name of a plan currently defined for the cluster.
-The `username` and `password` must identify an administrator with the Full Admin role.
+== Required Permissions
+
+Full Admin, Backup Full Admin, or Read-Only Admin roles.
+
+
 
 [#responses]
 == Responses
 
-If a specified `plan-id` does not exist, `404 Object Not Found` is returned, with an object such as the following: `{"status":404,"msg":"requested plan not found"}`.
 
-Failure to authenticate returns `401 Unauthorized`.
-An incorrectly specified URI returns `404 Object Not Found`.
+|===
+|Value | Description  
+
+| `200 OK`, plus a JSON array or object containing plan information, depending on the endpoint.
+| Successful call.
 
+| `400` 
+| Invalid parameter.
+
+| `401 Unauthorized`
+|  Authorization failure due to incorrect username or password.
+
+| `403 Forbidden`, plus a JSON message explaining the minimum permissions.
+| The provided username has insufficient privileges to call this method.
+
+| `404 Object Not Found` and the message `{"status":404,"msg":"requested plan not found"}`
+| The plan in the endpoint URI does not exist.
+
+| `500 Could not retrieve the requested repository`
+| Error in Couchbase Server.
+
+|===
 
 [#examples]
 == Examples
 
-The following call returns an array, each of whose members is an object containing information for a plan currently defined for the cluster.
-Note that the output is piped to the https://stedolan.github.io/jq[jq^] command, to facilitate readability:
+The following call returns an array, each of whose members is an object containing information for a plan currently defined for the cluster.
+The command pipes the output to the https://stedolan.github.io/jq[`jq`^] command to improve readability:
 
+[source, console]
 ----
curl -v -X GET http://127.0.0.1:8097/api/v1/plan \
 -u Administrator:password | jq '.'
@@ -55,6 +88,8 @@ curl -v -X GET http://127.0.0.1:8097/api/v1/api/v1/plan \
 
 If the call is successful, `200 OK` is returned, with an array the initial part of which may appear as follows:
 
+
+[source, json]
 ----
 [
   {
@@ -101,6 +136,10 @@ If the call is successful, `200 OK` is returned, with an array the initial part
           .
           .
           .
+      }
+    ]
+  }
+]
 ----
 
 Each object in the array contains information on the specified plan.
@@ -109,6 +148,7 @@ Each task is listed with an account of its type and schedule.
 
 The following call returns information specifically on the plan `testPlan2`:
 
+[source, console]
 ----
 curl -v -X GET http://127.0.0.1:8091/_p/backup/api/v1/plan/testPlan2 \
 -u Administrator:password | jq '.'
@@ -116,6 +156,7 @@ curl -v -X GET http://127.0.0.1:8091/_p/backup/api/v1/plan/testPlan2 \
 
 If the call is successful, `200 OK` is returned, with the following object:
 
+[source, json]
 ----
 {
   "name": "testPlan2",
@@ -150,7 +191,7 @@ If the call is successful, `200 OK` is returned, with the following object:
 }
 ----
 
-The object contains information on the specified plan.
+The object contains information about the specified plan.
 The information includes confirmation of the services for which data is backed up by the plan; and the tasks that are performed for the plan.
 Each task is listed with an account of its type and schedule.
 
@@ -158,7 +199,7 @@ Each task is listed with an account of its type and schedule.
 [#see-also]
 == See Also
 
-An overview of the Backup Service is provided in xref:learn:services-and-indexes/services/backup-service.adoc[Backup Service].
-A step-by-step guide to using Couchbase Web Console to configure and use the Backup Service is provided in xref:manage:manage-backup-and-restore/manage-backup-and-restore.adoc[Manage Backup and Restore].
-For information on using the Backup Service REST API to create and edit plans, see xref:rest-api:backup-create-and-edit-plans.adoc[Create and Edit Plans].
-For information on deleting plans, see xref:rest-api:backup-delete-plan.adoc[Delete a Plan].
+* For an overview of the Backup Service, see xref:learn:services-and-indexes/services/backup-service.adoc[].
+* For a step-by-step guide to using Couchbase Server Web Console to configure and use the Backup Service, see xref:manage:manage-backup-and-restore/manage-backup-and-restore.adoc[].
+* For information about using the Backup Service REST API to create and edit plans, see xref:rest-api:backup-create-and-edit-plans.adoc[].
+* For information about deleting plans, see xref:rest-api:backup-delete-plan.adoc[].
diff --git a/modules/rest-api/pages/backup-get-repository-info.adoc b/modules/rest-api/pages/backup-get-repository-info.adoc
index 4d3b5f94a1..525e69915f 100644
--- a/modules/rest-api/pages/backup-get-repository-info.adoc
+++ b/modules/rest-api/pages/backup-get-repository-info.adoc
@@ -1,71 +1,114 @@
-= Get Information on Repositories
-:description: The Backup Service REST API allows information to be retrieved on active, imported, or archived repositories.
+= Get Backup Repository Information
+:description: The Backup Service REST API lets you list and get information about the active, imported, and archived backup repositories.
 
 [abstract]
+
+== Description
+
 {description}
 
 [#http-methods-and-uris]
 == HTTP Methods and URIs
 
+
+List all backup repositories with a specific status:
+----
+GET /api/v1/cluster/self/repository/{REPO_STATUS}
 ----
-GET /cluster/self/repository/< "active" | "imported" | "archived" >
 
-GET /cluster/self/repository/< "active" | "imported" | "archived" >/
 
-GET /cluster/self/repository/< "active" | "imported" | "archived" >//info
+Get overview information about a specific backup repository:
+----
+GET /api/v1/cluster/self/repository/{REPO_STATUS}/{REPO_NAME}
 ----
 
-[#description]
-== Description
 
-The `GET /cluster/self/repository/< "active" | "imported" | "archived" >` http method and URI return an array, each of whose members is an object containing information on a repository.
-Information includes repository names, file or cluster paths, and plan details.
+Get detailed information about a specific backup repository including backup names and dates, buckets, items, and mutations:
+----
+GET /api/v1/cluster/self/repository/{REPO_STATUS}/{REPO_NAME}/info
+----
 
-The `GET /cluster/self/repository/< "active" | "imported" | "archived" >/` http method and URI return a single object, containing information on the repository whose name is specified by the `repository-id` path-parameter.
-Information includes repository names, file or cluster paths, and plan details.
+NOTE: These URIs are only available from the Backup Service port (8097 by default) on nodes running the Backup Service.
+
+.Path Parameters
+[cols="2,3,2"]
+|===
+|Name | Description | Schema
+
+|`REPO_STATUS`
+| The current status of the repository.
+a| One of the following:
+
+* `active`
+* `imported`
+* `archived` 
 
-The `GET /cluster/self/repository/< "active" | "imported" | "archived" >//info` http method and URI return a single object, containing information on the repository whose name is specified by the `repository-id` path-parameter.
-Information includes backup names and dates, buckets, items, and mutations.
+| `REPO_NAME`
+| The name of the backup repository.
+| String
+
+|===
 
 [#curl-syntax]
 == Curl Syntax
 
 ----
-curl -X GET :8097/cluster/self/\
-repository/< "active" | "imported" | "archived" >
--u :
+curl -X GET $BACKUP_SERVICE_NODE:$BACKUP_SERVICE_PORT/api/v1/cluster/self/\
+repository/$REPO_STATUS
+-u $USERNAME:$PASSWORD
 
-curl -X GET :8097/cluster/self/\
-repository/< "active" | "imported" | "archived" >/
--u :
+curl -X GET $BACKUP_SERVICE_NODE:$BACKUP_SERVICE_PORT/api/v1/cluster/self/\
+repository/$REPO_STATUS/$REPO_NAME
+-u $USERNAME:$PASSWORD
 
-curl -X GET :8097/cluster/self/\
-repository/< "active" | "imported" | "archived" >//info
--u :
+curl -X GET $BACKUP_SERVICE_NODE:$BACKUP_SERVICE_PORT/api/v1/cluster/self/\
+repository/$REPO_STATUS/$REPO_NAME/info
+-u $USERNAME:$PASSWORD
 ----
 
-The `repository-id` path-parameter must be the name of a repository.
-The `username` and `password` must identify an administrator with the Full Admin role.
+
+== Required Permissions
+
+You must have the Full Admin, Backup Full Admin, or Read-Only Admin role.
+
 
 [#responses]
 == Responses
 
-Successful location of all repositories returns `200 OK`, and an array of repositories.
+|===
+|Value | Description  
+
+| `200 OK` and a JSON array or object containing repository information, depending on the specific endpoint.
+| Successful call.
+
+| `400`
+| Invalid parameter.
 
-Successful location of a specified repository returns `200 OK` and an object containing information on the repository.
-If the specified repository is not located, `404` is returned, with the following object: `{"status": 404, "msg": "no repositories found"}`.
+| `404`, plus the object `{"status": 404, "msg": "no repositories found"}`
+| The repository in the endpoint URI does not exist.
+
+| `401 Unauthorized`
+| Authorization failure due to an incorrect username or password.
+
+| `403 Forbidden`, plus a JSON message explaining the minimum permissions.
+| The provided username has insufficient privileges to call this method.
+
+| `404 Object Not Found`
+| Error in the URI path.
+
+| `500 Could not retrieve the requested repository`
+| Error in Couchbase Server.
+
+|===
 
-If an internal error causes the call the fail, `500` is returned; with the message `Could not retrieve the requested repository`.
-Failure to authenticate returns `401 Unauthorized`.
-An incorrectly specified URI returns `404 Object Not Found`.
-An incorrectly specified method returns `404 Object Not Found`, and returns the object `{"status":404,"msg":"requested plan not found"}`.
 
 [#examples]
 == Examples
 
-The following call returns information on all currently defined, active repositories,.
-Note that the output is piped to the https://stedolan.github.io/jq/[jq^] command, to facilitate readability.
+The following `curl` command returns information about all active repositories.
+The command pipes the output to the https://stedolan.github.io/jq/[`jq`^] command for readability.
 
+[source, console]
 ----
 curl -v -X GET \
 http://127.0.0.1:8097/api/v1/cluster/self/repository/active \
@@ -76,6 +119,7 @@ Successful execution returns a JSON array, each of whose members is an object co
 Information includes repository names, file or cluster paths, and plan details.
 The initial part of the potentially extensive output might appear as follows:
 
+[source, json]
 ----
 [
   {
@@ -113,6 +157,8 @@ The initial part of the potentially extensive output might appear as follows:
       .
       .
       .
+  }
+]
 ----
 
 Each object thus contains the `id` (name), `plan_name`, `state`, `repo` (unique identifier), and scheduled tasks for the repository.
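As a quick sketch of post-processing this response (assuming `jq` is installed; the inline JSON is a trimmed, hypothetical sample of the array shown above), you can reduce each repository object to a one-line summary:

```shell
# Summarize each repository object as "<id> <plan_name> <state>", tab-separated.
# The inline JSON is a trimmed, made-up sample of the /repository/active response.
echo '[
  {"id": "restRepo", "plan_name": "_daily_backup", "state": "active"},
  {"id": "quarterHourBackups", "plan_name": "_hourly_backup", "state": "active"}
]' | jq -r '.[] | "\(.id)\t\(.plan_name)\t\(.state)"'
```

The same filter can be applied to the live output of the `GET .../repository/active` call above.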
@@ -120,6 +166,7 @@ It also contains an account of the repository's health, its creation time, and t
 
 The following call returns information on a specific, named, active repository:
 
+[source, console]
 ----
 curl -v -X GET \
 http://127.0.0.1:8091/_p/backup/api/v1/cluster/self/repository/active/restRepo \
@@ -128,6 +175,7 @@ http://127.0.0.1:8091/_p/backup/api/v1/cluster/self/repository/active/restRepo \
 
 If successful, the call returns the following object:
 
+[source, json]
 ----
 {
   "id": "restRepo",
@@ -169,6 +217,7 @@ The object thus contains information on the specified repository.
 
 The following call returns information including backup names and dates, buckets, items, and mutations; on an imported repository named `mergedRepo`:
 
+[source, console]
 ----
 curl -v -X GET http://127.0.0.1:8097/api/v1/cluster/self/repository/imported/mergedRepo/info \
 -u Administrator:password  | jq
@@ -176,6 +225,7 @@ curl -v -X GET http://127.0.0.1:8097/api/v1/cluster/self/repository/imported/mer
 
 If successful, the initial part of the potentially extensive output is as follows:
 
+[source, json]
 ----
 {
   "name": "7509894b-7138-40fe-917e-9581d298482c",
@@ -252,13 +302,16 @@ If successful, the initial part of the potentially extensive output is as follow
         .
         .
         .
+    }
+  ]
+}
 ----
 
 
 [#see-also]
 == See Also
 
-An overview of the Backup Service is provided in xref:learn:services-and-indexes/services/backup-service.adoc[Backup Service].
-A step-by-step guide to using Couchbase Web Console to configure and use the Backup Service is provided in xref:manage:manage-backup-and-restore/manage-backup-and-restore.adoc[Manage Backup and Restore].
-Information on using the Backup Service REST API to create a plan is provided in xref:rest-api:backup-create-and-edit-plans.adoc[Create and Edit Plans].
-Information on using the Backup Service REST API to create a repository is provided in xref:rest-api:backup-create-repository.adoc[Create a Repository].
+* For an overview of the Backup Service, see xref:learn:services-and-indexes/services/backup-service.adoc[Backup Service].
+* For a step-by-step guide to using the Couchbase Server Web Console to configure and use the Backup Service, see xref:manage:manage-backup-and-restore/manage-backup-and-restore.adoc[Manage Backup and Restore].
+* For information about using the Backup Service REST API to create a plan, see xref:rest-api:backup-create-and-edit-plans.adoc[Create and Edit Plans].
+* For information about using the Backup Service REST API to create a repository, see xref:rest-api:backup-create-repository.adoc[Create a Repository].
diff --git a/modules/rest-api/pages/backup-get-task-info.adoc b/modules/rest-api/pages/backup-get-task-info.adoc
index 8ec21515da..5fffba9907 100644
--- a/modules/rest-api/pages/backup-get-task-info.adoc
+++ b/modules/rest-api/pages/backup-get-task-info.adoc
@@ -1,78 +1,142 @@
-= Get Information on Tasks
-:description: The Backup Service REST API allows information to be retrieved on the task history of an active, imported, or archived repository.
+= Get Backup Task History
+:description: The Backup Service REST API lets you retrieve the task history of an active, imported, or archived repository.
 
+[#description]
+== Description
 [abstract]
-{description}
+These HTTP methods and URIs return an array containing the entire task history for the repository specified by the `REPO_NAME` path-parameter.
+
+The optional `TASK_SUBSET_PARAMETERS` let you select a subset of the tasks that you want the method to return.
 
 [#http-methods-and-uris]
 == HTTP Methods and URIs
 
 ----
-GET /cluster/self/repository/< "active" | "imported" | "archived" >//taskHistory
+GET /api/v1/cluster/self/repository/{REPO_STATUS}/{REPO_NAME}/taskHistory
 
-GET /cluster/self/repository/< "active" | "imported" | "archived" >//taskHistory?
+GET /api/v1/cluster/self/repository/{REPO_STATUS}/{REPO_NAME}/taskHistory?{TASK_SUBSET_PARAMETERS}
 ----
 
-[#description]
-== Description
+NOTE: These URIs are only available from the Backup Service port (8097 by default) on nodes running the Backup Service.
 
-The `GET /cluster/self/repository/active//taskHistory` http method and URI return an array containing the entire task history for the repository specified by the `repository-name` path-parameter.
+.Path Parameters
+[cols="2,3,2"]
+|===
+|Name | Description | Schema
 
-The `GET /cluster/self/repository/active//taskHistory?` http method and URI return an array containing the task history for a subset of the tasks performed for the repository specified by the `repository-name` path-parameter.
+| `REPO_STATUS`
+| The status of the backup repository. 
+a| Must be one of:
 
-In each case, the `repository-name` can be that of an active, imported, or archived repository.
+* `active`
+* `imported`
+* `archived` 
 
-[#curl-syntax]
-== Curl Syntax
 
-----
-curl -X GET http://:8097/cluster/self\
-/repository/< "active" | "imported" | "archived" >/\
-/taskHistory
--u :
+| `REPO_NAME`
+| The name of the repository.
+| String
+
+| `TASK_SUBSET_PARAMETERS`
+| One or more optional query parameters that filter the list of tasks this method returns.
+| <<subset-spec,Task Subset Parameter String>>
+
+|===
 
-curl -X GET http://:8097/cluster/self\
-/repository/< "active" | "imported" | "archived" >/\
-/taskHistory?
--u :
+[[subset-spec]]
+=== Task Subset Parameter String
 
+You can filter the list of tasks this method returns using the optional task subset specification query string. 
+You can supply one or more of the following parameters:
+
+----
+first={DATE}&limit={COUNT}&taskName={TASK_NAME}
 ----
 
-A subset of tasks to be returned is optionally determined by the `task-subset-specification`, whose syntax is as follows:
+.Task Subset Query String Parameters
+[cols="2,3,2"]
+|===
+|Name | Description | Value 
+
+| `DATE`
+| Only returns tasks that started after the supplied date. 
+| A datetime string in https://www.rfc-editor.org/rfc/rfc3339[RFC-3339] format
+
+| `COUNT`
+| The number of most recent tasks to return.
+| Integer
+
+| `TASK_NAME`
+a| Only returns tasks whose name exactly matches `TASK_NAME`, including case.
+Use this option to filter tasks when multiple tasks write to the same repository.
+| String
+
+|===
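For example, the RFC-3339 datetime expected by `first` can be produced with a POSIX `date` command (a sketch; adjust to your shell environment):

```shell
# Build an RFC-3339 UTC timestamp for the `first` query parameter,
# for example 2024-05-06T14:24:22Z.
FIRST="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
echo "first=${FIRST}"
```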
+
+[#curl-syntax]
+== Curl Syntax
 
 ----
-first=&limit=&taskName=
+curl -X GET http://$BACKUP_SERVICE_NODE:$BACKUP_SERVICE_PORT/api/v1/cluster/self\
+     /repository/$REPO_STATUS/$REPO_NAME/taskHistory \
+     -u $USERNAME:$PASSWORD
+
+curl -G -X GET http://$BACKUP_SERVICE_NODE:$BACKUP_SERVICE_PORT/api/v1/cluster/self\
+     /repository/$REPO_STATUS/$REPO_NAME/taskHistory \
+     [-d "first=$DATE"] [-d "limit=$COUNT"] [-d "taskName=$TASK_NAME"] \
+     -u $USERNAME:$PASSWORD
 ----
 
-The `date` specified as the value of the query parameter `first` is the earliest date for which tasks are included.
-The integer specified as the value of the query parameter `limit` is the maximum number of tasks to be returned.
-The string provided as the value of the optional query parameter `taskName` is the name of the single task to be returned.
+== Required Permissions
 
-The `username` and `password` must identify an administrator with the Full Admin role.
+You must have the Full Admin, Backup Full Admin, or Read-Only Admin role.
 
 [#responses]
 == Responses
 
-Successful execution returns `200 OK`, and an array each of whose members is an object containing information on a task discharged for the repository.
-If an invalid parameter is specified, `400` is returned.
-If the specified repository cannot be found, `404 Object Not Found` is returned.
-If an internal error prevents successful execution, `500 Internal Server Error` is returned.
-Failure to authenticate returns `401 Unauthorized`.
-An incorrectly specified URI returns `404 Object Not Found`.
+|===
+|Value | Description  
+
+| `200 OK` and JSON array containing the tasks
+| Successful call.
+
+| `400`
+| Invalid parameter.
+
+| `404 Object Not Found`
+| The repository in the endpoint URI does not exist.
+
+| `401 Unauthorized`
+| Authorization failure due to an incorrect username or password.
+
+| `403 Forbidden`, plus a JSON message explaining the minimum permissions.
+| The provided username has insufficient privileges to call this method.
+
+| `404 Object Not Found`
+| Error in the URI path.
+
+| `500 Internal Server Error`
+| Error in Couchbase Server.
+
+|===
 
 [#example]
-== Example
+== Examples
+
+NOTE: The following examples assume you are running the curl command from a node that is running the Backup Service.
 
 The following call returns the entire task history for the active repository `quarterHourBackups`:
 
+[source, console]
 ----
 curl -v -X GET http://127.0.0.1:8097/api/v1/cluster/self/\
-repository/active/quarterHourBackups/taskHistory \
--u Administrator:password
+     repository/active/quarterHourBackups/taskHistory \
+     -u Administrator:password
 ----
 
 If the call is successful, the first part of the potentially extensive output may appear as follows:
 
+[source, json]
 ----
 [
   {
@@ -132,14 +196,134 @@ If the call is successful, the first part of the potentially extensive output ma
                 .
                 .
                 .
+  }
+]
 ----
 
-The array thus includes objects for specific runs of the task `fifteenMinuteBackup`.
+The array includes objects for specific runs of the task `fifteenMinuteBackup`.
-Each object incudes the `start` and `end` time of the task; and lists specific `node_runs`, with details on buckets whose data was backed up.
+Each object includes the `start` and `end` time of the task and lists specific `node_runs`, with details on buckets whose data was backed up.
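To get a quick success count out of such a history (a sketch assuming `jq` is installed; the inline JSON is a trimmed, hypothetical sample of the array above), you can select on the `status` field:

```shell
# Count the task runs that completed with status "done".
# The inline JSON is a trimmed, made-up sample of a taskHistory response.
echo '[
  {"task_name": "fifteenMinuteBackup", "status": "done"},
  {"task_name": "fifteenMinuteBackup", "status": "running"}
]' | jq '[.[] | select(.status == "done")] | length'
```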
 
+The following example demonstrates using the `first` and `limit` query parameters to limit the results to two tasks that started after 14:24:22 UTC on May 6th, 2024.
+
+[source, console]
+----
+curl -G -s -X GET http://127.0.0.1:8097/api/v1/cluster/self/repository/active/quarterHourBackups/taskHistory \
+     -d 'first=2024-05-06T14:24:22Z' \
+     -d 'limit=2' \
+     -u Administrator:password | jq
+----
+
+A successful call returns a task list resembling the following:
+
+[source, json]
+----
+[
+  {
+    "task_name": "fifteenMinuteBackup",
+    "status": "done",
+    "start": "2024-05-06T17:24:22.471826882Z",
+    "end": "2024-05-06T17:24:28.901488385Z",
+    "node_runs": [
+      {
+        "node_id": "1a41682a59f40d3932d2cf7b131a2312",
+        "status": "done",
+        "start": "2024-05-06T17:24:22.483698673Z",
+        "end": "2024-05-06T17:24:28.889650843Z",
+        "progress": 100,
+        "stats": {
+          "id": "36dfeb46-78b0-428a-b9d6-36b0169ac685",
+          "current_transfer": 1,
+          "total_transfers": 1,
+          "transfers": [
+            {
+              "description": "Backing up to 2024-05-06T17_24_22.886394673Z",
+              "stats": {
+                "started_at": 1715016262871131000,
+                "finished_at": 1715016268874403800,
+                "buckets": {
+                  "travel-sample": {
+                    "total_items": 63344,
+                    "total_vbuckets": 1024,
+                    "vbuckets_complete": 1024,
+                    "bytes_received": 28672,
+                    "failover_logs_received": 1024,
+                    "started_at": 1715016266774038500,
+                    "finished_at": 1715016268870321400,
+                    "complete": true
+                  }
+                },
+                "users": {},
+                "complete": true
+              },
+              "progress": 100,
+              "eta": "2024-05-06T17:24:28.878288801Z"
+            }
+          ],
+          "progress": 100,
+          "eta": "2024-05-06T17:24:28.878288801Z"
+        },
+        "error_code": 0
+      }
+    ],
+    "error_code": 0,
+    "type": "BACKUP"
+  },
+  {
+    "task_name": "fifteenMinuteBackup",
+    "status": "done",
+    "start": "2024-05-06T17:09:22.279129423Z",
+    "end": "2024-05-06T17:09:28.677706343Z",
+    "node_runs": [
+      {
+        "node_id": "1a41682a59f40d3932d2cf7b131a2312",
+        "status": "done",
+        "start": "2024-05-06T17:09:22.291632632Z",
+        "end": "2024-05-06T17:09:28.667370885Z",
+        "progress": 100,
+        "stats": {
+          "id": "7dabe789-0413-4ef2-b7d9-e942cab1da75",
+          "current_transfer": 1,
+          "total_transfers": 1,
+          "transfers": [
+            {
+              "description": "Backing up to 2024-05-06T17_09_22.690112298Z",
+              "stats": {
+                "started_at": 1715015362678973000,
+                "finished_at": 1715015368655166200,
+                "buckets": {
+                  "travel-sample": {
+                    "total_items": 63344,
+                    "total_vbuckets": 1024,
+                    "vbuckets_complete": 1024,
+                    "bytes_received": 28672,
+                    "failover_logs_received": 1024,
+                    "started_at": 1715015366548654800,
+                    "finished_at": 1715015368651093200,
+                    "complete": true
+                  }
+                },
+                "users": {},
+                "complete": true
+              },
+              "progress": 100,
+              "eta": "2024-05-06T17:09:28.658444968Z"
+            }
+          ],
+          "progress": 100,
+          "eta": "2024-05-06T17:09:28.658444968Z"
+        },
+        "error_code": 0
+      }
+    ],
+    "error_code": 0,
+    "type": "BACKUP"
+  }
+]
+----
+
 [#see-also]
 == See Also
 
-An overview of the Backup Service is provided in xref:learn:services-and-indexes/services/backup-service.adoc[Backup Service].
-A step-by-step guide to using Couchbase Web Console to configure and use the Backup Service is provided in xref:manage:manage-backup-and-restore/manage-backup-and-restore.adoc[Manage Backup and Restore].
-Information on using the Backup Service REST API to create a plan (and in so doing, define one or more tasks) is provided in xref:rest-api:backup-create-and-edit-plans.adoc[Create and Edit Plans].
+* For an overview of the Backup Service, see xref:learn:services-and-indexes/services/backup-service.adoc[Backup Service].
+* For a step-by-step guide to configuring and using the Backup Service with the Couchbase Server Web Console, see xref:manage:manage-backup-and-restore/manage-backup-and-restore.adoc[Manage Backup and Restore].
+* For information about using the Backup Service REST API to create a plan, see xref:rest-api:backup-create-and-edit-plans.adoc[Create and Edit Plans].
diff --git a/modules/rest-api/pages/backup-manage-config.adoc b/modules/rest-api/pages/backup-manage-config.adoc
index d22c231920..456fdc898c 100644
--- a/modules/rest-api/pages/backup-manage-config.adoc
+++ b/modules/rest-api/pages/backup-manage-config.adoc
@@ -1,5 +1,6 @@
 = Manage Backup Configuration
-:description: The rotation period and size for Backup Service configuration data can be set and returned by means of the REST API.
+:description: This method lets you get and set the rotation size for Backup Service history.
+
 
 [abstract]
 {description}
@@ -7,64 +8,73 @@
 [#http-methods-and-uris]
 == HTTP Methods and URIs
 
+Get the current rotation configuration:
+
+----
+GET /api/v1/config
 ----
-POST /config
 
-PUT /config
+Apply a new configuration:
 
-GET /config
 ----
+POST /api/v1/config
+----
+
+// Commenting out because I can't get this to work. --Gary
+// Edit an existing configuration: 
+//----
+//PUT /config
+//----
+
+.POST Parameter
+[cols="2,3,2"]
+|===
+|Name | Description | Schema
+
+| `history_rotation_size`
+| The maximum size, in megabytes, that the backup history can grow to before the Backup Service starts removing older history.
+| Integer between 5 and 200 (default 50)
 
-[#description]
-== Description
+|===
 
-Used with the `POST` http method, the `/config` URI establishes, with `PUT` modifies, and with `GET` retrieves the rotation limits for Backup Service configuration data.
 
 [#curl-syntax]
 == Curl Syntax
 
 ----
-curl -X POST http://:8097/config
-  -u -u :
-  -d 
+curl -X GET http://$BACKUP_SERVICE_NODE:$BACKUP_SERVICE_PORT/api/v1/config
+  -u $USERNAME:$PASSWORD
 
-curl -X PUT http://:8097/config
-  -u -u :
-  -d 
-
-curl -X GET http://:8097/config
-  -u -u :
-  -d 
+curl -X POST http://$BACKUP_SERVICE_NODE:$BACKUP_SERVICE_PORT/api/v1/config
+  -u $USERNAME:$PASSWORD
+  -d '{"history_rotation_size":$HISTORY_ROTATION_SIZE}'
 ----
 
-The `username` and `password` must be those of a user with the `Full Admin` role.
-The `rotation-settings` must be specified as a JSON payload.
-The settings are:
-
-* `historyRotationPeriod`.
-A number of days.
-The default value is 30, the minimum 1, the maximum 365.
-When this number of days has elapsed, the configuration file is rotated.
-
-* `historyRotationSize`.
-A number of megabytes.
-The default value is 50, the minimum 5, the maximum 200.
+== Required Permissions
 
-When this size is reached, the configuration file is rotated.
+To call this method via GET: Full Admin, Backup Admin, or Read-Only Admin.
 
-Note that the configuration file grows in size due to the progressive accumulation of task-history for the cluster.
-On rotation, a sequentially numbered copy of the current configuration file is made.
-The current configuration file is then deleted, and a new file is created when new data is written.
+To call this method via POST: Full Admin or Backup Admin.
 
 [#responses]
 == Responses
 
-For all three http methods, success returns `200 OK`.
-If an improper value is expressed, `400 Bad Request` is returned, with a message such as the following: `{"status":400,"msg":"rotation size has to be between 5 and 200"}`.
+|===
+|Value | Description  
+
+| `200 OK`, plus a JSON object containing the current settings when calling via GET.
+| Successful call.
+
+| `400 Bad Request`, plus the JSON message `{"status": 400, "msg": "rotation size has to be between 5 and 200"}`
+| Returned when trying to set the rotation size to an invalid value.
+
+| `401 Unauthorized`
+| Authorization failure due to an incorrect username or password.
 
-Failure to authenticate returns `401 Unauthorized`.
-An internal error that prevents return or modification of the limits returns `500 Internal Server Error`.
+| `403 Forbidden`, plus a JSON message explaining the minimum permissions.
+| The provided username has insufficient privileges to call this method.
 
+|===
 
 [#examples]
 == Examples
@@ -72,21 +82,21 @@ An internal error that prevents return or modification of the limits returns `50
 The following call returns the current configuration limits:
 
 ----
-curl -v -X GET http://127.0.0.1:8091/_p/backup/api/v1/config \
+curl -v -X GET http://127.0.0.1:8097/api/v1/config \
 -u Administrator:password
 ----
 
 If successful, the call returns `200 OK`, and the following object:
 
 ----
-{"history_rotation_period":30,"history_rotation_size":50}
+{"history_rotation_size":50}
 ----
 
-The following call modifies both rotation period and size:
+The following call modifies the rotation size:
 
 ----
-curl -v -X POST http://127.0.0.1:8091/_p/backup/api/v1/config -u Administrator:password \
---data '{"history_rotation_period":32,"history_rotation_size":51}'
+curl -v -X POST http://127.0.0.1:8097/api/v1/config -u Administrator:password \
+-d '{"history_rotation_size":51}'
 ----
 
 Success returns `200 OK`.
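Because out-of-range values are rejected with `400 Bad Request`, it can be convenient to validate the value in the shell before posting (a sketch; `HISTORY_ROTATION_SIZE` is a placeholder variable, not part of the API):

```shell
# Check the rotation size against the service's 5-200 limit before sending the POST.
HISTORY_ROTATION_SIZE=51
if [ "$HISTORY_ROTATION_SIZE" -ge 5 ] && [ "$HISTORY_ROTATION_SIZE" -le 200 ]; then
  echo "ok to post: $HISTORY_ROTATION_SIZE"
else
  echo "rotation size has to be between 5 and 200" >&2
fi
```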
@@ -94,5 +104,5 @@ Success returns `200 OK`.
 [#see-also]
 == See Also
 
-An overview of the Backup Service is provided in xref:learn:services-and-indexes/services/backup-service.adoc[Backup Service].
-A step-by-step guide to using Couchbase Web Console to configure and use the Backup Service is provided in xref:manage:manage-backup-and-restore/manage-backup-and-restore.adoc[Manage Backup and Restore].
+* For an overview of the Backup Service, see xref:learn:services-and-indexes/services/backup-service.adoc[].
+* For a step-by-step guide to using Couchbase Server Web Console to configure and use the Backup Service, see xref:manage:manage-backup-and-restore/manage-backup-and-restore.adoc[].
diff --git a/modules/rest-api/pages/backup-pause-and-resume-tasks.adoc b/modules/rest-api/pages/backup-pause-and-resume-tasks.adoc
index b180313bb0..339783b33f 100644
--- a/modules/rest-api/pages/backup-pause-and-resume-tasks.adoc
+++ b/modules/rest-api/pages/backup-pause-and-resume-tasks.adoc
@@ -73,4 +73,4 @@ Again, success returns `200 OK`.
 An overview of the Backup Service is provided in xref:learn:services-and-indexes/services/backup-service.adoc[Backup Service].
 A step-by-step guide to using Couchbase Web Console to configure and use the Backup Service is provided in xref:manage:manage-backup-and-restore/manage-backup-and-restore.adoc[Manage Backup and Restore].
 Information on using the Backup Service REST API to create a plan (and in so doing, define one or more tasks) is provided in xref:rest-api:backup-create-and-edit-plans.adoc[Create and Edit Plans].
-To get information on currently defined tasks, see xref:rest-api:backup-get-task-info.adoc[Get Information on Tasks].
+To get information on currently defined tasks, see xref:rest-api:backup-get-task-info.adoc[Get Backup Task History].
diff --git a/modules/rest-api/pages/rest-get-cluster-tasks.adoc b/modules/rest-api/pages/rest-get-cluster-tasks.adoc
index 9c2ffd5f86..9a81eed13f 100644
--- a/modules/rest-api/pages/rest-get-cluster-tasks.adoc
+++ b/modules/rest-api/pages/rest-get-cluster-tasks.adoc
@@ -1,49 +1,67 @@
 = Getting Cluster Tasks
-:description: pass:q[A list of ongoing cluster tasks can be returned with the `GET /pools/default/tasks` HTTP method and URI.]
+:description: pass:q[You can list tasks running on the cluster using the `GET /pools/default/tasks` HTTP method and URI.]
 :page-topic-type: reference
+:page-toclevels: 3
 
 [abstract]
 {description}
-Additionally, a report on the last-completed rebalance can be returned with `GET /logs/rebalanceReport?reportID=`.
+In addition, a report on the last-completed rebalance can be returned with `GET /logs/rebalanceReport?reportID=REPORT_ID`.
 
 [#http-method-and-uri]
-== HTTP methods and URIs
+== HTTP Methods and URIs
 
 ----
 GET /pools/default/tasks
 
-GET /logs/rebalanceReport?reportID=
+GET /pools/default/tasks?taskId=TASK_ID
+
+GET /logs/rebalanceReport?reportID=REPORT_ID
 ----
 
 [#rest-get-cluster-tasks-description]
 == Description
 
-By means of `GET /pools/default/tasks`, ongoing cluster-tasks can be reported; with status, id, and additional information returned for each.
+Calling `GET /pools/default/tasks` lists tasks running on the cluster.
+The list includes each task's ID, status, and other relevant information.
+You can return information about a sample bucket loading task by supplying its `taskId` as a parameter: `GET /pools/default/tasks?taskId=TASK_ID`.
 
-By means of `GET /logs/rebalanceReport?reportID=`, a report can be returned, providing information on a completed _rebalance_.
+Calling `GET /logs/rebalanceReport?reportID=REPORT_ID` returns a report providing information on a completed _rebalance_.
 The required `report-id` is provided in the object returned by `GET /pools/default/tasks`.
 
 [#curl-syntax]
 == Curl Syntax
 
 ----
-curl -v -X GET -u :
-  http://:8091/pools/default/tasks
+curl -s -u USER_NAME:PASSWORD -G \
+     http://NODE_NAME_OR_ADDRESS:PORT/pools/default/tasks \
+     -d "taskId=TASK_ID"
+
 
-curl -v -X GET -u :
-  http://:8091/logs/rebalanceReport?reportID=
+curl -s -u USER_NAME:PASSWORD -G \
+     http://NODE_NAME_OR_ADDRESS:PORT/logs/rebalanceReport \
+     -d "reportID=REPORT_ID"
 ----
 
-The required `report-id` is provided in the object returned by `GET /pools/default/tasks`, as the value of `lastReportURI`.
+The arguments shown in the syntax are:
+
+* *`USER_NAME`*: the username to use when connecting to Couchbase Server.
+* *`PASSWORD`*: the password for the user.
+* *`NODE_NAME_OR_ADDRESS`*: the name or IP address of a node in the cluster.
+* *`TASK_ID`*: the optional ID of a sample bucket loading task whose status you want to view.
+You can find the `taskId` for a sample bucket task in the return value of a call to either the `/sampleBuckets/install` or `/pools/default/tasks` endpoint.
+If you do not provide a task ID, the call returns all tasks on the cluster.
+* *`REPORT_ID`*: the required rebalance report ID.  
+You can find this ID from the rebalance task's `lastReportURI` field in the task list returned by calling `GET /pools/default/tasks` without parameters.
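As a sketch of extracting that ID with `jq` (the inline JSON is a trimmed, hypothetical task list; the real one comes from `GET /pools/default/tasks`):

```shell
# Pull the report ID out of the rebalance task's lastReportURI field.
# The inline JSON is a trimmed, made-up sample of the task list.
echo '[
  {"type": "rebalance", "status": "notRunning",
   "lastReportURI": "/logs/rebalanceReport?reportID=8d06b28e7a30917e57037f4a23f1a536"}
]' | jq -r '.[] | select(.type == "rebalance") | .lastReportURI | split("=") | .[1]'
```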
 
 [#responses]
 == Responses
 
-For `GET /pools/default/tasks`, success gives the response code `200 OK`, and returns an object containing information on the current status of ongoing tasks.
-See the examples provided below.
+A successful call to `GET /pools/default/tasks` returns the response code `200 OK` and an object containing details about a specific task when you supply a `taskId` argument.
+If you do not supply the `taskId` argument, it returns an object containing information about all tasks.
 
-For `GET /logs/rebalanceReport?reportID=`, success gives the response code `200 OK`, and returns an object that contains a report on the last-completed rebalance.
-Specifying a `report-id` that cannot be found gives `200 OK`, but returns an object signifying the error, as follows:
+A successful call to `GET /logs/rebalanceReport?reportID=REPORT_ID` returns response code `200 OK` and an object containing a report on the rebalance whose `reportID` matches `REPORT_ID`.
+If you pass Couchbase Server a `reportID` that it cannot find, it returns the response code `200 OK` and an object containing the following error:
 
 ----
 {
@@ -51,7 +69,7 @@ Specifying a `report-id` that cannot be found gives `200 OK`, but returns an obj
 }
 ----
 
-Specifying a `report-id` of incorrect length gives `400 Bad Request`, and returns an object signifying the error, as follows:
+Passing a `reportID` of incorrect length returns `400 Bad Request` and an object with the following error message:
 
 ----
 {
@@ -61,25 +79,25 @@ Specifying a `report-id` of incorrect length gives `400 Bad Request`, and return
 }
 ----
 
-For both calls, failure to authenticate gives `401 Unauthorized`.
+For both endpoints, failure to authenticate returns the response `401 Unauthorized`.
 
 [#examples]
 == Examples: Retrieving Cluster Tasks
 
-The following examples show output returned in accordance with _whether_ tasks are in progress; and if so, _which_.
+The following examples demonstrate using the `/pools/default/tasks` and `/logs/rebalanceReport` endpoints.
 
 [#no-tasks-underway]
-=== No Tasks Underway
+=== No Tasks Running
+
+When the cluster has no tasks running, calling the `/pools/default/tasks` method returns just the most recent rebalance task:
 
-When the cluster has no tasks underway, the method verifies this.
-Here, the output is piped to the https://stedolan.github.io/jq[jq] tool, to enhance readability.
 
 ----
 curl -u Administrator:password -v -X GET \
-http://10.143.194.101:8091/pools/default/tasks | jq '.'
+     http://10.143.194.101:8091/pools/default/tasks | jq '.'
 ----
 
-Output is as follows:
+Output is piped through the https://stedolan.github.io/jq[`jq`^] tool to enhance readability and appears as follows:
 
 ----
 [
@@ -96,42 +114,144 @@ Output is as follows:
 
 The default output indicates that no rebalance is underway.
 A `statusId` and task `type` are provided.
-The `lastReportURI` specifies the location of the _report_ of the last rebalance to have been performed.
+The `lastReportURI` specifies the location of the report of the last rebalance to have been performed.
 See the xref:rebalance-reference:rebalance-reference.adoc[Rebalance Reference], for further information.
 
 [#adding-a-bucket]
-=== Adding a Bucket
+[#loading-a-sample-bucket]
+=== Monitoring Sample Bucket Loading Tasks
+
+You can monitor the tasks Couchbase Server starts to load one or more sample buckets through the `/pools/default/tasks` method.
+The following example starts loading two sample buckets by calling the `/sampleBuckets/install` endpoint:
+
+[source, console]
+----
+include::rest-api:example$install-sample-bucket.sh[]
+----
 
-When a bucket is being added (in this case, the sample bucket `beer-sample`), status can be returned by entering the method in the standard way, specifying the IP address of the cluster, or `localhost`, as appropriate:
+The response lists the tasks the method call started to load the sample buckets:
 
+[source, json]
+----
+include::rest-api:example$sample-bucket-install-response.json[]
+----
+
+You can also list the tasks using the `/pools/default/tasks` method:
+
+[source, console]
 ----
 curl -u Administrator:password -v -X GET \
-http://localhost:8091/pools/default/tasks | jq '.'
+     http://localhost:8091/pools/default/tasks | jq '.'
+----
+
+The output of this call includes the active sample bucket task, which is loading the `beer-sample` bucket:
+
+[source, json]
+----
+include::rest-api:example$sample-bucket-tasks.json[]
+----
+
+NOTE: The task for loading the `travel-sample` bucket does not appear in the previous output because it is not currently running.
+Only one sample bucket task runs at a time.
+
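As a sketch of how the combined task list can be narrowed down client-side, the following filters a saved copy of a `/pools/default/tasks` response for sample-bucket loading tasks with `jq`. The JSON here is an illustrative excerpt, not live server output:

```shell
# Save an illustrative excerpt of a /pools/default/tasks response.
# (In practice this would come from the curl call shown above.)
cat > /tmp/tasks.json <<'EOF'
[
  {"task_id": "e2a5e251-faee-415d-b146-d0891123075b",
   "status": "running", "type": "loadingSampleBucket",
   "bucket": "beer-sample"},
  {"task_id": "f1c55abe-926a-415d-bf36-f3d99c27016f",
   "status": "queued", "type": "loadingSampleBucket",
   "bucket": "travel-sample"}
]
EOF

# Keep only the sample-bucket loading tasks and print "bucket: status".
jq -r '.[] | select(.type == "loadingSampleBucket")
           | "\(.bucket): \(.status)"' /tmp/tasks.json
```

With the two tasks above, this prints `beer-sample: running` followed by `travel-sample: queued`.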
+To monitor a sample bucket task specifically, you can call `/pools/default/tasks` with the `taskId` found in either the response from `/sampleBuckets/install` or the list of tasks from `/pools/default/tasks`.
+The following example demonstrates getting the status of the `travel-sample` bucket task:
+
+[source, console]
+----
+curl -s -u Administrator:password -G \
+     http://localhost:8091/pools/default/tasks \
+     -d "taskId=ed6cd88e-d704-4f91-8dd3-543e03669024" | jq '.'
+----
+
+The result shows that the task is queued:
+
+[source, json]
+----
+[
+  {
+    "task_id": "f1c55abe-926a-415d-bf36-f3d99c27016f",
+    "status": "queued",
+    "type": "loadingSampleBucket",
+    "bucket": "travel-sample",
+    "bucket_uuid": "not_present"
+  }
+]
+----
+
+Viewing the status of the `beer-sample` task shows that it's running:
+
+[source, json]
+----
+[
+  {
+    "task_id": "e2a5e251-faee-415d-b146-d0891123075b",
+    "status": "running",
+    "type": "loadingSampleBucket",
+    "bucket": "beer-sample",
+    "bucket_uuid": "not_present"
+  }
+]
 ----
 
-Output is as follows:
+After the `beer-sample` task finishes, Couchbase Server starts the `travel-sample` task.
+Calling `/pools/default/tasks` again without a `taskId` shows the `travel-sample` task as well as a number of indexing tasks, as shown in the following result:
 
+[source, json]
 ----
 [
   {
-    "statusId": "1f05320a7b359e1672ffc8b7ee69a8b5",
+    "statusId": "e36859e44fb7c226c180b4610313f074",
     "type": "rebalance",
+    "subtype": "rebalance",
     "status": "notRunning",
     "statusIsStale": false,
     "masterRequestTimedOut": false,
-    "lastReportURI": "/logs/rebalanceReport?reportID=0c41dba637a8971b1aa921a89e851d83"
+    "lastReportURI": "/logs/rebalanceReport?reportID=af5b2ac96af031218bae6e3411b007b5"
+  },
+  {
+    "type": "global_indexes",
+    "recommendedRefreshPeriod": 2,
+    "status": "running",
+    "bucket": "travel-sample",
+    "index": "def_inventory_route_sourceairport",
+    "id": 5548520957444133000,
+    "progress": 0,
+    "statusIsStale": false
   },
   {
+    "type": "global_indexes",
+    "recommendedRefreshPeriod": 2,
+    "status": "running",
+    "bucket": "travel-sample",
+    "index": "def_inventory_route_schedule_utc",
+    "id": 7890207154426784000,
+    "progress": 19,
+    "statusIsStale": false
+  },
+    . 
+    . 
+    . 
+  {
+    "type": "global_indexes",
+    "recommendedRefreshPeriod": 2,
+    "status": "running",
+    "bucket": "travel-sample",
+    "index": "def_inventory_airline_primary",
+    "id": 2691871325047162400,
+    "progress": 12,
+    "statusIsStale": false
+  },
+  {
+    "task_id": "f1c55abe-926a-415d-bf36-f3d99c27016f",
     "status": "running",
     "type": "loadingSampleBucket",
-    "bucket": "beer-sample",
-    "pid": "<0.24849.21>"
+    "bucket": "travel-sample",
+    "bucket_uuid": "not_present"
   }
 ]
 ----
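Because the response mixes task types, it can be useful to pull out just the index-build progress. The following is a minimal sketch, run against an illustrative excerpt of the response rather than live output:

```shell
# Illustrative excerpt of a /pools/default/tasks response.
cat > /tmp/all-tasks.json <<'EOF'
[
  {"type": "rebalance", "status": "notRunning"},
  {"type": "global_indexes", "status": "running",
   "bucket": "travel-sample",
   "index": "def_inventory_route_schedule_utc", "progress": 19},
  {"type": "global_indexes", "status": "running",
   "bucket": "travel-sample",
   "index": "def_inventory_airline_primary", "progress": 12},
  {"type": "loadingSampleBucket", "status": "running",
   "bucket": "travel-sample"}
]
EOF

# Print each index build with its percentage progress.
jq -r '.[] | select(.type == "global_indexes")
           | "\(.index): \(.progress)%"' /tmp/all-tasks.json
```

The same filter works on the real response piped straight from `curl`.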
 
-The output indicates that no rebalance is underway, but that the `loadingSampleBucket` operation is ongoing.
-
 [#compacting-a-bucket]
 === Compacting a Bucket
 
@@ -171,32 +291,13 @@ The output indicates that the `beer-sample` bucket is being compacted.
 Progress is reported in terms of `changesDone`, `totalChanges`, and a `progress` figure that is a percentage of total completion.
 A URI is provided for cancelling compaction, if required.
 
-[#loading-a-sample-bucket]
-=== Loading a Sample Bucket
 
-If a sample bucket is loaded, task status can be returned, by entering the method in the standard way, specifying the IP address of the cluster, or `localhost`, as appropriate:
-
-----
-curl -X GET http://localhost:8091/pools/default/tasks -u Administrator:password | jq '.'
-----
-
-The output includes the following:
-
-----
-{
-  "status": "running",
-  "type": "loadingSampleBucket",
-  "bucket": "travel-sample",
-  "pid": "<0.11528.51>"
-}
-----
 
-This indicates that the `travel-sample` bucket is being loaded, and shows the process id for the task.
 
 [#performing-xdcr]
 === Performing XDCR
 
-If an instance of XDCR is underway, its task status can be returned, by entering the method in the standard way, specifying the IP address of the cluster, or `localhost`, as appropriate:
+You can monitor an ongoing XDCR task using the `/pools/default/tasks` method:
 
 ----
 curl -X GET http://localhost:8091/pools/default/tasks -u Administrator:password | jq '.'
diff --git a/modules/rest-api/pages/rest-sample-buckets.adoc b/modules/rest-api/pages/rest-sample-buckets.adoc
index fb8a4ba0bf..7bf16099a2 100644
--- a/modules/rest-api/pages/rest-sample-buckets.adoc
+++ b/modules/rest-api/pages/rest-sample-buckets.adoc
@@ -1,18 +1,16 @@
 = Managing Sample Buckets
-:description: pass:q[Couchbase Server allows _sample buckets_ to be installed. \
-These contain data ready to be used for development and testing.]
+:description: pass:q[Couchbase Server offers several sample buckets you can install for development and testing.]
 :page-topic-type: reference
 
 [abstract]
 {description}
 
 == Description
 
-Couchbase Server allows _sample buckets_ to be installed, and then used for development and testing.
+Couchbase Server offers several sample buckets with data ready for development and testing.
 
-== HTTP methods and URIs
 
-The following methods and URIs respectively allow the names of the currently available sample buckets to be retrieved, and one or more to be installed on the cluster.
+== HTTP Methods and URIs
 
 ----
 GET /sampleBuckets
@@ -20,40 +18,49 @@ GET /sampleBuckets
 POST /sampleBuckets/install
 ----
 
+The GET endpoint lists the sample buckets available in Couchbase Server.
+The POST endpoint installs one or more sample buckets.
+
 == Curl Syntax
 
+[source,console]
 ----
-curl -X GET -u [username]:[password]
-  http://[node-name-or-ip-address]:8091/sampleBuckets
+curl -X GET -u USERNAME:PASSWORD
+  http://NODE_NAME_OR_IP:PORT/sampleBuckets
 
-curl -X POST -u [username]:[password]
-  http://[node-name-or-ip-address]:8091/sampleBuckets/install
-  -d '[ ,  ]'
+curl -X POST -u USERNAME:PASSWORD
+  http://NODE_NAME_OR_IP:PORT/sampleBuckets/install
+  -d '[ "SAMPLE_BUCKET_NAME", ... ]'
 ----
 
-The `node-name-or-ip-address` can be that of any node in the cluster.
-Each `bucketname` must be the name of an available sample bucket, specified as a string.
+* *`NODE_NAME_OR_IP`*: the hostname or IP address of any node in the cluster.
+* *`PORT`*: the cluster administration port, `8091` by default.
+* *`"SAMPLE_BUCKET_NAME"`*: the name of an available sample bucket, specified as a string. You can provide a comma-separated list of sample bucket names to install more than one.
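The request body is a plain JSON array of bucket names. If you are scripting the install, one way to build that array safely (a sketch, not the only option; `jq` 1.6 or later is assumed for `--args`) is to let `jq` handle the quoting:

```shell
# Build the JSON array body for /sampleBuckets/install from
# shell arguments, letting jq handle the quoting.
body=$(jq -cn '$ARGS.positional' --args travel-sample beer-sample)
echo "$body"
```

The resulting `$body` value can then be passed to `curl` with `-d "$body"`.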
 
 == Responses
 
-If the GET is successful, `200 OK` is given, and an object describing available sample buckets is returned.
-If the POST is successful, `200 OK` is given, and an empty message-list is returned.
-In either case, an incorrectly specified bucket-name or URI gives `404 Object Not Found`; and failure to authenticate gives `401 Unauthorized`.
+If the GET succeeds, it returns `200 OK` and a JSON object describing available sample buckets.
+
+If the POST succeeds, it returns `200 OK` and a JSON array containing information about the tasks Couchbase Server started to load the buckets.
 
-Incorrectly using the POST to install one or more sample buckets that are already installed returns a list containing a message for each error; such as `["Sample bucket travel-sample is already loaded.","Sample bucket beer-sample is already loaded."]`.
+For either endpoint, an incorrectly specified bucket name or URI returns `404 Object Not Found`, and failure to authenticate returns `401 Unauthorized`.
+
+Calling the POST endpoint to install one or more sample buckets that are already installed returns a list containing a message for each error; such as `["Sample bucket travel-sample is already loaded.","Sample bucket beer-sample is already loaded."]`.
 
 == Examples
 
 The following example retrieves a list of the currently available sample buckets.
 Note that the output is piped to the https://stedolan.github.io/jq/[jq] program, to facilitate readability.
 
+[source,console]
 ----
 curl -X GET -u Administrator:password \
 http://10.143.194.101:8091/sampleBuckets | jq
 ----
 
-If successful, the call returns output such as the following:
+If successful, the call returns output similar to the following:
 
+[source,json]
 ----
 [
   {
@@ -74,23 +81,31 @@ If successful, the call returns output such as the following:
 ]
 ----
 
-Each available bucket is listed, along with its current install-status (`true` or `false`).
-The memory quota required for each bucket, in Bytes, is also stated: note that this minimum must be available even though the actual sample bucket might not, with its default content, require it all.
+The output lists each available sample bucket, whether it's currently installed, and the memory quota required to install it.
+
+NOTE: The `quotaNeeded` value is the minimum amount of memory that Couchbase Server must have available.
+The sample bucket might not consume this entire amount when you install it.
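Before installing several samples at once, the `quotaNeeded` values can be summed client-side to check against the cluster's available memory. A minimal sketch, using illustrative figures rather than live output:

```shell
# Illustrative excerpt of a /sampleBuckets response (quotaNeeded in bytes).
cat > /tmp/sample-buckets.json <<'EOF'
[
  {"name": "beer-sample",    "installed": false, "quotaNeeded": 104857600},
  {"name": "travel-sample",  "installed": false, "quotaNeeded": 104857600},
  {"name": "gamesim-sample", "installed": true,  "quotaNeeded": 104857600}
]
EOF

# Total bytes required for the buckets that are not yet installed.
jq '[.[] | select(.installed == false) | .quotaNeeded] | add' \
   /tmp/sample-buckets.json
```

Here two uninstalled buckets at 104857600 bytes each sum to 209715200 bytes.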
 
 The following example installs the `travel-sample` and `beer-sample` sample buckets:
 
+[source,console]
 ----
-curl -X POST -u Administrator:password \
-http://10.143.194.101:8091/sampleBuckets/install \
--d '["travel-sample", "beer-sample"]'
+include::rest-api:example$install-sample-bucket.sh[]
 ----
 
-If successful, the call returns an empty list.
+If successful, the call returns a JSON array containing the tasks that Couchbase Server started to load the buckets:
 
-== See Also
+[source,json]
+----
+include::rest-api:example$sample-bucket-install-response.json[]
+----
 
-Information on _deleting_ buckets is provided in xref:rest-api:rest-bucket-delete.adoc[Deleting Buckets].
+You can monitor the status of these tasks by calling the `/pools/default/tasks` REST API endpoint. 
+See xref:rest-get-cluster-tasks.adoc[] for more information.
 
-Information on installing sample buckets with the CLI is provided in xref:cli:cbdocloader-tool.adoc[cbdocloader].
+== See Also
 
-Information on installing sample buckets with Couchbase Web Console is provided in xref:manage:manage-settings/install-sample-buckets.adoc[Sample Buckets].
+* For an overview of sample buckets, see xref:manage:manage-settings/install-sample-buckets.adoc[].
+* xref:rest-api:rest-bucket-delete.adoc[] explains how to delete buckets using the REST API.
+* xref:cli:cbdocloader-tool.adoc[cbdocloader] explains how to install sample buckets using the command line interface.
+* xref:manage:manage-settings/install-sample-buckets.adoc[] explains how to load sample buckets using the Couchbase Server Web Console.
diff --git a/modules/rest-api/partials/rest-backup-service-table.adoc b/modules/rest-api/partials/rest-backup-service-table.adoc
index 29f9e4758f..f31161b9cd 100644
--- a/modules/rest-api/partials/rest-backup-service-table.adoc
+++ b/modules/rest-api/partials/rest-backup-service-table.adoc
@@ -37,15 +37,15 @@
 
 | `GET`
 | `/cluster/self/repository/<'active'|'archived'|'imported'>`
-| xref:rest-api:backup-get-repository-info.adoc[Get Information on Repositories]
+| xref:rest-api:backup-get-repository-info.adoc[Get Backup Repository Information]
 
 | `GET`
 | `/cluster/self/repository/active/`
-| xref:rest-api:backup-get-repository-info.adoc[Get Information on Repositories]
+| xref:rest-api:backup-get-repository-info.adoc[Get Backup Repository Information]
 
 | `GET`
 | `/cluster/self/repository/<'active'|'archived'|'imported'>//info`
-| xref:rest-api:backup-get-repository-info.adoc[Get Information on Repositories]
+| xref:rest-api:backup-get-repository-info.adoc[Get Backup Repository Information]
 
 | `POST`
 | `/cluster/self/repository/active/`
@@ -104,11 +104,11 @@
 
 | `GET`
 | `/cluster/plan`
-| xref:rest-api:backup-get-plan-info.adoc[Get Information on Plans]
+| xref:rest-api:backup-get-plan-info.adoc[Get Backup Plan Information]
 
 | `GET`
 | `/cluster/plan/`
-| xref:rest-api:backup-get-plan-info.adoc[Get Information on Plans]
+| xref:rest-api:backup-get-plan-info.adoc[Get Backup Plan Information]
 
 | `POST`
 | `/cluster/plan/`
@@ -132,11 +132,11 @@
 
 | `GET`
 | `/cluster/self/repository/<'active'|'archived'|'imported'>//taskHistory`
-| xref:rest-api:backup-get-task-info.adoc[Get Information on Tasks]
+| xref:rest-api:backup-get-task-info.adoc[Get Backup Task History]
 
 | `GET`
 | `/cluster/self/repository/<'active'|'archived'|'imported'>//taskHistory?`
-| xref:rest-api:backup-get-task-info.adoc[Get Information on Tasks]
+| xref:rest-api:backup-get-task-info.adoc[Get Backup Task History]
 
 |===
 
diff --git a/modules/settings/examples/node-level-settings.jsonc b/modules/settings/examples/node-level-settings.jsonc
deleted file mode 100644
index 4660e03da2..0000000000
--- a/modules/settings/examples/node-level-settings.jsonc
+++ /dev/null
@@ -1,38 +0,0 @@
-{
-  "atrcollection": "",
-  "auto-prepare": false,
-  "cleanupclientattempts": true,
-  "cleanuplostattempts": true,
-  "cleanupwindow": "1m0s",
-  "completed": {
-    "aborted": null,
-    "threshold": 1000
-  },
-  "completed-limit": 4000,
-  "completed-threshold": 1000,
-  "controls": false,
-  "cpuprofile": "",
-  "debug": false,
-  "functions-limit": 16384,
-  "keep-alive-length": 16384,
-  "loglevel": "INFO",
-  "max-index-api": 4,
-  "max-parallelism": 1,
-  "memory-quota": 0,
-  "memprofile": "",
-  "mutexprofile": false,
-  "n1ql-feat-ctrl": 76,
-  "numatrs": 1024,
-  "pipeline-batch": 16,
-  "pipeline-cap": 512,
-  "plus-servicers": 16,
-  "prepared-limit": 16384,
-  "pretty": false,
-  "profile": "off",
-  "request-size-cap": 67108864,
-  "scan-cap": 512,
-  "servicers": 4,
-  "timeout": 0,
-  "txtimeout": "0s",
-  "use-cbo": true
-}
\ No newline at end of file
diff --git a/modules/settings/examples/node-level-settings.sh b/modules/settings/examples/node-level-settings.sh
deleted file mode 100644
index c582ed1bd0..0000000000
--- a/modules/settings/examples/node-level-settings.sh
+++ /dev/null
@@ -1,5 +0,0 @@
-#!/bin/sh
-
-# tag::curl[]
-curl http://localhost:8093/admin/settings -u user:pword
-# end::curl[]
\ No newline at end of file
diff --git a/modules/settings/examples/param-names.n1ql b/modules/settings/examples/param-names.n1ql
deleted file mode 100644
index 5f8fccaa43..0000000000
--- a/modules/settings/examples/param-names.n1ql
+++ /dev/null
@@ -1,9 +0,0 @@
-/* tag::arguments[] */
-\SET -@country "France";
-\SET -$altitude 500;
-/* end::arguments[] */
-
-/* tag::statement[] */
-SELECT COUNT(*) FROM airport
-WHERE country = $country AND geo.alt > @altitude;
-/* end::statement[] */
\ No newline at end of file
diff --git a/modules/settings/examples/param-numbers.n1ql b/modules/settings/examples/param-numbers.n1ql
deleted file mode 100644
index 83121fa65c..0000000000
--- a/modules/settings/examples/param-numbers.n1ql
+++ /dev/null
@@ -1,8 +0,0 @@
-/* tag::arguments[] */
-\SET -args ["France", 500];
-/* end::arguments[] */
-
-/* tag::statement[] */
-SELECT COUNT(*) FROM airport
-WHERE country = $1 AND geo.alt > @2;
-/* end::statement[] */
\ No newline at end of file
diff --git a/modules/settings/examples/param-positions.n1ql b/modules/settings/examples/param-positions.n1ql
deleted file mode 100644
index 697b37d6b1..0000000000
--- a/modules/settings/examples/param-positions.n1ql
+++ /dev/null
@@ -1,8 +0,0 @@
-/* tag::arguments[] */
-\SET -args ["France", 500];
-/* end::arguments[] */
-
-/* tag::statement[] */
-SELECT COUNT(*) FROM airport
-WHERE country = ? AND geo.alt > ?;
-/* end::statement[] */
\ No newline at end of file
diff --git a/modules/settings/examples/save-node-level-settings.sh b/modules/settings/examples/save-node-level-settings.sh
deleted file mode 100644
index ddbac30341..0000000000
--- a/modules/settings/examples/save-node-level-settings.sh
+++ /dev/null
@@ -1,5 +0,0 @@
-#!/bin/sh
-
-# tag::curl[]
-curl http://localhost:8093/admin/settings -u user:pword -o ./query_settings.json
-# end::curl[]
\ No newline at end of file
diff --git a/modules/settings/pages/query-settings.adoc b/modules/settings/pages/query-settings.adoc
deleted file mode 100644
index 7a364e6be4..0000000000
--- a/modules/settings/pages/query-settings.adoc
+++ /dev/null
@@ -1,607 +0,0 @@
-= Settings and Parameters
-:description: You can configure the Query service using cluster-level query settings, node-level query settings, and request-level query parameters.
-:page-aliases: manage:manage-settings/query-settings
-:tabs:
-
-// External cross-references
-:rest-cluster-query-settings: xref:rest-api:rest-cluster-query-settings.adoc
-:general-settings-query-settings: xref:manage:manage-settings/general-settings.adoc#query-settings
-:couchbase-cli-setting-query: xref:cli:cbcli/couchbase-cli-setting-query.adoc
-:n1ql-rest-api-admin: xref:n1ql:n1ql-rest-api/admin.adoc
-:n1ql-rest-api-index: xref:n1ql:n1ql-rest-api/index.adoc
-:cbq-shell: xref:tools:cbq-shell.adoc
-:rest-intro: xref:rest-api:rest-intro.adoc
-:query-preferences: xref:tools:query-workbench.adoc#query-preferences
-
-// Pass through HTML table styles for this page
-
-ifdef::basebackend-html[]
-++++
-
-++++
-endif::[]
-
-[abstract]
-{description}
-
-== Overview
-
-There are three ways of configuring the Query service:
-
-* Specify cluster-level settings for all nodes running the Query service in the cluster.
-* Specify node-level settings for a single node running the Query service.
-* Specify parameters for individual requests.
-
-You must set and use cluster-level query settings, node-level query settings, and request-level parameters in different ways.
-
-.Comparison of Query Settings and Parameters
-[cols="216s,145,145,145,230"]
-|===
-| | Set Per | Set By | Set On | Set Via
-
-| Cluster-level query settings ^[<>]^
-| Cluster
-| System administrator
-| Server side
-| The CLI, cURL statements, or the UI
-
-| Node-level query settings ^[<>]^
-| Service Node
-| System administrator
-| Server side
-| cURL statements
-
-| Request-level parameters
-| Request (statement)
-| Each user
-| Client side
-| `cbq` shell, cURL statements, client programming, or the UI
-|===
-
-[#service-level]
-NOTE: Cluster-level settings and node-level settings are collectively referred to as [def]_service-level settings_.
-
-[#query-setting-levels-and-equivalents]
-== How Setting Levels Interact
-
-Some query settings are cluster-level, node-level, or request-level only.
-Other query settings apply to more than one level with slightly different names.
-
-[#cluster-level-and-node-level]
-=== How Cluster-Level Settings Affect Node-Level Settings
-
-If a cluster-level setting has an equivalent node-level setting, then changing the cluster-level setting overwrites the node-level setting for all Query nodes in the cluster.
-
-You can change a node-level setting for a single node to be different to the equivalent cluster-level setting.
-Changing the node-level setting does not affect the equivalent cluster-level setting.
-However, you should note that the node-level setting may be overwritten by subsequent changes at the cluster-level.
-In particular, specifying query settings via the CLI or the UI makes changes at the cluster-level.
-
-[#node-level-and-request-level]
-=== How Node-Level Settings Affect Request-Level Parameters
-
-If a request-level parameter has an equivalent node-level setting, the node-level setting _usually_ acts as the default for the request-level parameter, as described in the tables below.
-Setting a request-level parameter overrides the equivalent node-level setting.
-
-Furthermore, for numeric values, if a request-level parameter has an equivalent node-level setting, the node-level setting dictates the upper-bound value of the request-level parameter.
-For example, if the node-level `timeout` is set to 500, then the request-level parameter cannot be set to 501 or any value higher.
-
-== All Query Settings
-
-.Single-Level Settings -- Not Equivalent
-[.fixed-width, cols="1,1,1"]
-|===
-| Cluster-Level Only Settings | Node-Level Only Settings | Request-Level Only Parameters
-
-a| [%hardbreaks]
-<>
-<>
-<>
-
-a| [%hardbreaks]
-<>
-<>
-<>
-<>
-<>
-<>
-<