Update Documentation Feature Flags [1.4.0.Beta1]

s1monw committed Oct 1, 2014
1 parent 47f7b27 commit 1f25669

Showing 34 changed files with 48 additions and 48 deletions.

@@ -1,7 +1,7 @@
[[analysis-keep-types-tokenfilter]]
=== Keep Types Token Filter

-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

A token filter of type `keep_types` that only keeps tokens with a token type
contained in a predefined set.
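
For illustration, a minimal index-settings sketch that wires such a filter into a custom analyzer could look like this (the index, filter and analyzer names are placeholders; `<NUM>` is the token type emitted by the `standard` tokenizer for numeric tokens):

[source,js]
--------------------------------------------------
curl -XPUT 'http://localhost:9200/keep_types_example' -d '{
  "settings" : {
    "analysis" : {
      "filter" : {
        "extract_numbers" : {
          "type" : "keep_types",
          "types" : [ "<NUM>" ]
        }
      },
      "analyzer" : {
        "my_analyzer" : {
          "tokenizer" : "standard",
          "filter" : [ "lowercase", "extract_numbers" ]
        }
      }
    }
  }
}'
--------------------------------------------------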

2 changes: 1 addition & 1 deletion docs/reference/api-conventions.asciidoc

@@ -51,7 +51,7 @@ specified to expand to all indices.
+
If `none` is specified then wildcard expansion will be disabled and if `all`
is specified, wildcard expressions will expand to all indices (this is equivalent
-to specifying `open,closed`). coming[1.4.0.Beta1]
+to specifying `open,closed`). added[1.4.0.Beta1]

The default settings for the above parameters depend on the API being used.


2 changes: 1 addition & 1 deletion docs/reference/docs/get.asciidoc

@@ -125,7 +125,7 @@ will fail.
[float]
[[generated-fields]]
=== Generated fields
-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

If no refresh occurred between indexing and the GET request, GET will access the transaction log to fetch the document. However, some fields are generated only when indexing.
If you try to access a field that is only generated when indexing, you will get an exception (the default). You can choose to ignore fields that are generated if the transaction log is accessed by setting `ignore_errors_on_generated_fields=true`.
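
For illustration only (using the `twitter/tweet/1` naming that appears elsewhere in these docs), the flag is passed as a URL parameter:

[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/twitter/tweet/1?ignore_errors_on_generated_fields=true'
--------------------------------------------------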

2 changes: 1 addition & 1 deletion docs/reference/docs/multi-get.asciidoc

@@ -181,7 +181,7 @@ curl 'localhost:9200/_mget' -d '{
[float]
=== Generated fields

-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

See <<generated-fields>> for fields that are generated only when indexing.


4 changes: 2 additions & 2 deletions docs/reference/docs/multi-termvectors.asciidoc

@@ -3,7 +3,7 @@

The multi termvectors API allows you to get multiple termvectors at once. The
documents from which to retrieve the term vectors are specified by an index,
-type and id. But the documents could also be artificially provided coming[1.4.0.Beta1].
+type and id. But the documents could also be artificially provided added[1.4.0.Beta1].
The response includes a `docs`
array with all the fetched termvectors, each element having the structure
provided by the <<docs-termvectors,termvectors>>
@@ -92,7 +92,7 @@ curl 'localhost:9200/testidx/test/_mtermvectors' -d '{
}'
--------------------------------------------------

-Additionally coming[1.4.0.Beta1], just like for the <<docs-termvectors,termvectors>>
+Additionally added[1.4.0.Beta1], just like for the <<docs-termvectors,termvectors>>
API, term vectors could be generated for user provided documents. The syntax
is similar to the <<search-percolate,percolator>> API. The mapping used is
determined by `_index` and `_type`.
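
As a rough sketch (index, type and field names are made up), an artificial document might be passed via a `doc` entry in the `docs` array; the `doc` key here mirrors the termvectors artificial-document syntax and is an assumption:

[source,js]
--------------------------------------------------
curl -XGET 'localhost:9200/_mtermvectors' -d '{
  "docs" : [
    {
      "_index" : "testidx",
      "_type" : "test",
      "doc" : {
        "text" : "this is a test document"
      }
    }
  ]
}'
--------------------------------------------------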

10 changes: 5 additions & 5 deletions docs/reference/docs/termvectors.asciidoc

@@ -3,7 +3,7 @@

Returns information and statistics on terms in the fields of a particular
document. The document could be stored in the index or artificially provided
-by the user coming[1.4.0.Beta1]. Note that for documents stored in the index, this
+by the user added[1.4.0.Beta1]. Note that for documents stored in the index, this
is a near realtime API as the term vectors are not available until the next
refresh.

@@ -22,7 +22,7 @@ curl -XGET 'http://localhost:9200/twitter/tweet/1/_termvector?fields=text,...'

or by adding the requested fields in the request body (see
example below). Fields can also be specified with wildcards
-in a similar way to the <<query-dsl-multi-match-query,multi match query>> coming[1.4.0.Beta1].
+in a similar way to the <<query-dsl-multi-match-query,multi match query>> added[1.4.0.Beta1].
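
For example, a request along these lines (field names are placeholders) passes wildcards in the `fields` array of the body:

[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/twitter/tweet/1/_termvector?pretty=true' -d '{
  "fields" : [ "text", "user*" ],
  "term_statistics" : true
}'
--------------------------------------------------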

[float]
=== Return values
@@ -43,7 +43,7 @@ If the requested information wasn't stored in the index, it will be
computed on the fly if possible. Additionally, term vectors could be computed
for documents not even existing in the index, but instead provided by the user.

-coming[1.4.0.Beta1,The ability to compute term vectors on the fly as well as support for artificial documents is only available from 1.4.0 onwards (see examples 2 and 3 below, respectively)]
+added[1.4.0.Beta1,The ability to compute term vectors on the fly as well as support for artificial documents is only available from 1.4.0 onwards (see examples 2 and 3 below, respectively)]

[WARNING]
======
@@ -230,7 +230,7 @@ Response:
--------------------------------------------------

[float]
-=== Example 2 coming[1.4.0.Beta1]
+=== Example 2 added[1.4.0.Beta1]

Term vectors which are not explicitly stored in the index are automatically
computed on the fly. The following request returns all information and statistics for the
@@ -249,7 +249,7 @@ curl -XGET 'http://localhost:9200/twitter/tweet/1/_termvector?pretty=true' -d '{
--------------------------------------------------

[float]
-=== Example 3 coming[1.4.0.Beta1]
+=== Example 3 added[1.4.0.Beta1]

Additionally, term vectors can also be generated for artificial documents,
that is, for documents not present in the index. The syntax is similar to the

2 changes: 1 addition & 1 deletion docs/reference/docs/update.asciidoc

@@ -145,7 +145,7 @@ curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{
}
}'
--------------------------------------------------
-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

If the document does not exist you may want your update script to
run anyway in order to initialize the document contents using

4 changes: 2 additions & 2 deletions docs/reference/index-modules/fielddata.asciidoc

@@ -28,7 +28,7 @@ example, can be set to `5m` for a 5 minute expiry.
[[circuit-breaker]]
=== Circuit Breaker

-coming[1.4.0.Beta1,Prior to 1.4.0 there was only a single circuit breaker for fielddata]
+added[1.4.0.Beta1,Prior to 1.4.0 there was only a single circuit breaker for fielddata]

Elasticsearch contains multiple circuit breakers used to prevent operations from
causing an OutOfMemoryError. Each breaker specifies a limit for how much memory
@@ -69,7 +69,7 @@ parameters:
[[request-circuit-breaker]]
==== Request circuit breaker

-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

The request circuit breaker allows Elasticsearch to prevent per-request data
structures (for example, memory used for calculating aggregations during a

2 changes: 1 addition & 1 deletion docs/reference/index-modules/query-cache.asciidoc

@@ -1,7 +1,7 @@
[[index-modules-shard-query-cache]]
== Shard query cache

-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

When a search request is run against an index or against many indices, each
involved shard executes the search locally and returns its local results to

4 changes: 2 additions & 2 deletions docs/reference/indices/aliases.asciidoc

@@ -74,7 +74,7 @@ the same index. The filter can be defined using Query DSL and is applied
to all Search, Count, Delete By Query and More Like This operations with
this alias.

-coming[1.4.0.Beta1,Fields referred to in alias filters must exist in the mappings of the index/indices pointed to by the alias]
+added[1.4.0.Beta1,Fields referred to in alias filters must exist in the mappings of the index/indices pointed to by the alias]

To create a filtered alias, first we need to ensure that the fields already
exist in the mapping:
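
A rough sketch of such a setup (index, type, field, alias and value names are all illustrative): first create the index with the field mapped, then register the filtered alias through the `_aliases` endpoint:

[source,js]
--------------------------------------------------
curl -XPUT 'http://localhost:9200/test1' -d '{
  "mappings" : {
    "type1" : {
      "properties" : {
        "user" : { "type" : "string", "index" : "not_analyzed" }
      }
    }
  }
}'

curl -XPOST 'http://localhost:9200/_aliases' -d '{
  "actions" : [
    {
      "add" : {
        "index" : "test1",
        "alias" : "alias1",
        "filter" : { "term" : { "user" : "kimchy" } }
      }
    }
  ]
}'
--------------------------------------------------
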
@@ -312,7 +312,7 @@ Possible options:

The rest endpoint is: `/{index}/_alias/{alias}`.

-coming[1.4.0.Beta1,The API will always include an `aliases` section, even if there aren't any aliases. Previous versions would not return the `aliases` section]
+added[1.4.0.Beta1,The API will always include an `aliases` section, even if there aren't any aliases. Previous versions would not return the `aliases` section]

WARNING: For future versions of Elasticsearch, the default <<multi-index>> options will error if a requested index is unavailable. This is to bring
this API in line with the other indices GET APIs

2 changes: 1 addition & 1 deletion docs/reference/indices/clearcache.asciidoc

@@ -10,7 +10,7 @@ $ curl -XPOST 'http://localhost:9200/twitter/_cache/clear'
--------------------------------------------------

The API, by default, will clear all caches. Specific caches can be cleaned
-explicitly by setting `filter`, `fielddata`, `query_cache` coming[1.4.0.Beta1],
+explicitly by setting `filter`, `fielddata`, `query_cache` added[1.4.0.Beta1],
or `id_cache` to `true`.
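
For instance, a request along these lines (the index name and the choice of cache are just an example) clears only the shard query cache:

[source,js]
--------------------------------------------------
curl -XPOST 'http://localhost:9200/twitter/_cache/clear?query_cache=true'
--------------------------------------------------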

All caches relating to a specific field(s) can also be cleared by

2 changes: 1 addition & 1 deletion docs/reference/indices/create-index.asciidoc

@@ -131,7 +131,7 @@ curl -XPUT localhost:9200/test -d '{
[float]
=== Creation Date

-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

When an index is created, a timestamp is stored in the index metadata for the creation date. By
default it is automatically generated, but it can also be specified using the

2 changes: 1 addition & 1 deletion docs/reference/indices/flush.asciidoc

@@ -23,7 +23,7 @@ The flush API accepts the following request parameters:
`wait_if_ongoing`:: If set to `true` the flush operation will block until the
flush can be executed if another flush operation is already executing.
The default is `false` and will cause an exception to be thrown on
-the shard level if another flush operation is already running. coming[1.4.0.Beta1]
+the shard level if another flush operation is already running. added[1.4.0.Beta1]

`full`:: If set to `true` a new index writer is created and settings that have
been changed related to the index writer will be refreshed. Note: if a full flush

2 changes: 1 addition & 1 deletion docs/reference/indices/get-mapping.asciidoc

@@ -29,7 +29,7 @@ curl -XGET 'http://localhost:9200/_all/_mapping/tweet,book'
If you want to get mappings of all indices and types then the following
two examples are equivalent:

-coming[1.4.0.Beta1,The API will always include a `mappings` section, even if there aren't any mappings. Previous versions would not return the `mappings` section]
+added[1.4.0.Beta1,The API will always include a `mappings` section, even if there aren't any mappings. Previous versions would not return the `mappings` section]

[source,js]
--------------------------------------------------

2 changes: 1 addition & 1 deletion docs/reference/indices/stats.asciidoc

@@ -43,7 +43,7 @@ specified as well in the URI. Those stats can be any of:
`fielddata`:: Fielddata statistics.
`flush`:: Flush statistics.
`merge`:: Merge statistics.
-`query_cache`:: <<index-modules-shard-query-cache,Shard query cache>> statistics. coming[1.4.0.Beta1]
+`query_cache`:: <<index-modules-shard-query-cache,Shard query cache>> statistics. added[1.4.0.Beta1]
`refresh`:: Refresh statistics.
`suggest`:: Suggest statistics.
`warmer`:: Warmer statistics.

2 changes: 1 addition & 1 deletion docs/reference/indices/update-settings.asciidoc

@@ -224,7 +224,7 @@ curl -XPOST 'localhost:9200/myindex/_open'
[[codec-bloom-load]]
=== Bloom filters

-coming[1.4.0.Beta1,Bloom filters will no longer be loaded into memory at search time by default]
+added[1.4.0.Beta1,Bloom filters will no longer be loaded into memory at search time by default]

Up to version 1.3, Elasticsearch used to generate bloom filters for the `_uid`
field at indexing time and to load them at search time in order to speed-up

6 changes: 3 additions & 3 deletions docs/reference/indices/warmers.asciidoc

@@ -66,7 +66,7 @@ curl -XPUT localhost:9200/_template/template_1 -d '
}'
--------------------------------------------------

-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

On the same level as `types` and `source`, the `query_cache` flag is supported
to enable query caching for the warmed search request. If not specified, it will
@@ -142,7 +142,7 @@ where

Instead of `_warmer` you can also use the plural `_warmers`.

-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

The `query_cache` parameter can be used to enable query caching for
the search request. If not specified, it will use the index level configuration
@@ -182,7 +182,7 @@ Getting a warmer for a specific index (or alias, or several indices) based
on its name. The provided name can be a simple wildcard expression or
omitted to get all warmers.

-coming[1.4.0.Beta1,The API will always include a `warmers` section, even if there aren't any warmers. Previous versions would not return the `warmers` section]
+added[1.4.0.Beta1,The API will always include a `warmers` section, even if there aren't any warmers. Previous versions would not return the `warmers` section]

Some examples:


2 changes: 1 addition & 1 deletion docs/reference/mapping/dynamic-mapping.asciidoc

@@ -67,7 +67,7 @@ root and inner object types:
[float]
=== Unmapped fields in queries

-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

Queries and filters can refer to fields which don't exist in a mapping, except
when registering a new <<search-percolate,percolator query>> or when creating

2 changes: 1 addition & 1 deletion docs/reference/modules/discovery/zen.asciidoc

@@ -173,7 +173,7 @@ to 30 seconds and can be changed dynamically through the
[[no-master-block]]
==== No master block

-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

For a node to be fully operational, it must have an active master. The `discovery.zen.no_master_block` setting controls
which operations should be rejected when there is no active master.
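
As a configuration sketch, the setting is applied in `elasticsearch.yml` like any other node-level setting; the value used here is an assumption about one accepted option:

[source,yaml]
--------------------------------------------------
# reject all operations while no master is elected
# (assumed value; a less strict option such as `write` may also exist)
discovery.zen.no_master_block: all
--------------------------------------------------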

4 changes: 2 additions & 2 deletions docs/reference/modules/network.asciidoc

@@ -72,10 +72,10 @@ share the following allowed settings:
|=======================================================================
|Setting |Description
|`network.tcp.no_delay` |Enable or disable tcp no delay setting.
-Defaults to `true`. coming[1.4.0.Beta1,Can be set to `default` to not be set at all.]
+Defaults to `true`. added[1.4.0.Beta1,Can be set to `default` to not be set at all.]

|`network.tcp.keep_alive` |Enable or disable tcp keep alive. Defaults
-to `true`. coming[1.4.0.Beta1,Can be set to `default` to not be set at all].
+to `true`. added[1.4.0.Beta1,Can be set to `default` to not be set at all].

|`network.tcp.reuse_address` |Should an address be reused or not.
Defaults to `true` on non-windows machines.

2 changes: 1 addition & 1 deletion docs/reference/query-dsl/filters/range-filter.asciidoc

@@ -30,7 +30,7 @@ The `range` filter accepts the following parameters:
`lte`:: Less-than or equal to
`lt`:: Less-than

-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

When applied to `date` fields, the `range` filter also accepts a `time_zone` parameter.
The `time_zone` parameter will be applied to your input lower and upper bounds and will

@@ -73,7 +73,7 @@ First, each document is scored by the defined functions. The parameter
`max`:: maximum score is used
`min`:: minimum score is used

-Because scores can be on different scales (for example, between 0 and 1 for decay functions but arbitrary for `field_value_factor`) and also because sometimes a different impact of functions on the score is desirable, the score of each function can be adjusted with a user defined `weight` (coming[1.4.0.Beta1]). The `weight` can be defined per function in the `functions` array (example above) and is multiplied with the score computed by the respective function.
+Because scores can be on different scales (for example, between 0 and 1 for decay functions but arbitrary for `field_value_factor`) and also because sometimes a different impact of functions on the score is desirable, the score of each function can be adjusted with a user defined `weight` (added[1.4.0.Beta1]). The `weight` can be defined per function in the `functions` array (example above) and is multiplied with the score computed by the respective function.
If weight is given without any other function declaration, `weight` acts as a function that simply returns the `weight`.
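
A minimal sketch of a `function_score` query that combines a filtered weight function with a bare `weight` function (field names and values are made up for illustration):

[source,js]
--------------------------------------------------
{
  "query": {
    "function_score": {
      "query": { "match": { "body": "elasticsearch" } },
      "functions": [
        { "filter": { "term": { "popular": true } }, "weight": 2 },
        { "weight": 1 }
      ],
      "score_mode": "sum"
    }
  }
}
--------------------------------------------------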

The new score can be restricted to not exceed a certain limit by setting
@@ -135,7 +135,7 @@ you wish to inhibit this, set `"boost_mode": "replace"`

===== Weight

-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

The `weight` score allows you to multiply the score by the provided
`weight`. This can sometimes be desired since boost value set on

2 changes: 1 addition & 1 deletion docs/reference/query-dsl/queries/range-query.asciidoc

@@ -29,7 +29,7 @@ The `range` query accepts the following parameters:
`lt`:: Less-than
`boost`:: Sets the boost value of the query, defaults to `1.0`

-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

When applied to `date` fields, the `range` query also accepts a `time_zone` parameter.
The `time_zone` parameter will be applied to your input lower and upper bounds and will

2 changes: 1 addition & 1 deletion docs/reference/search/aggregations.asciidoc

@@ -128,7 +128,7 @@ define fixed number of multiple buckets, and others dynamically create the bucke
[float]
=== Caching heavy aggregations

-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

Frequently used aggregations (e.g. for display on the home page of a website)
can be cached for faster responses. These cached results are the same results

@@ -1,7 +1,7 @@
[[search-aggregations-bucket-children-aggregation]]
=== Children Aggregation

-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

A special single bucket aggregation that enables aggregating from buckets on parent document types to buckets on child documents.
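
A bare-bones sketch of the aggregation (the `answer` child type is an assumed example):

[source,js]
--------------------------------------------------
{
  "aggs" : {
    "to-answers" : {
      "children" : {
        "type" : "answer"
      }
    }
  }
}
--------------------------------------------------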


@@ -1,7 +1,7 @@
[[search-aggregations-bucket-filters-aggregation]]
=== Filters Aggregation

-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

Defines a multi-bucket aggregation where each bucket is associated with a
filter. Each bucket will collect all documents that match its associated
@@ -302,7 +302,7 @@ By default, the assumption is that the documents in the bucket are also contain


===== Chi square
-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

Chi square as described in "Information Retrieval", Manning et al., Chapter 13.5.2 can be used as significance score by adding the parameter

@@ -317,7 +317,7 @@ Chi square behaves like mutual information and can be configured with the same p


===== Google normalized distance
-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

Google normalized distance as described in "The Google Similarity Distance", Cilibrasi and Vitanyi, 2007 (http://arxiv.org/pdf/cs/0412098v3.pdf) can be used as significance score by adding the parameter


@@ -153,7 +153,7 @@ on high-cardinality fields as this will kill both your CPU since terms need to b

==== Calculating Document Count Error

-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

There are two error values which can be shown on the terms aggregation. The first gives a value for the aggregation as
a whole which represents the maximum potential document count for a term which did not make it into the final list of

@@ -1,7 +1,7 @@
[[search-aggregations-metrics-scripted-metric-aggregation]]
=== Scripted Metric Aggregation

-coming[1.4.0.Beta1]
+added[1.4.0.Beta1]

A metric aggregation that executes using scripts to provide a metric output.
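
A rough sketch of what such an aggregation might look like (the `amount` field and the script bodies are illustrative assumptions):

[source,js]
--------------------------------------------------
{
  "query" : { "match_all" : {} },
  "aggs" : {
    "total_amount" : {
      "scripted_metric" : {
        "init_script" : "_agg['values'] = []",
        "map_script" : "_agg.values.add(doc['amount'].value)",
        "combine_script" : "sum = 0; for (v in _agg.values) { sum += v }; return sum",
        "reduce_script" : "total = 0; for (s in _aggs) { total += s }; return total"
      }
    }
  }
}
--------------------------------------------------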


2 changes: 1 addition & 1 deletion docs/reference/search/count.asciidoc

@@ -64,7 +64,7 @@ query.
|default_operator |The default operator to be used, can be `AND` or
`OR`. Defaults to `OR`.

-|coming[1.4.0.Beta1] terminate_after |The maximum count for each shard, upon
+|added[1.4.0.Beta1] terminate_after |The maximum count for each shard, upon
reaching which the query execution will terminate early.
If set, the response will have a boolean field `terminated_early` to
indicate whether the query execution has actually terminated early.

2 changes: 1 addition & 1 deletion docs/reference/search/percolate.asciidoc

@@ -18,7 +18,7 @@ in the percolate api.
Fields referred to in a percolator query must *already* exist in the mapping
associated with the index used for percolation.
-coming[1.4.0.Beta1,Applies to indices created in 1.4.0 or later]
+added[1.4.0.Beta1,Applies to indices created in 1.4.0 or later]
There are two ways to make sure that a field mapping exists:
* Add or update a mapping via the <<indices-create-index,create index>> or

4 changes: 2 additions & 2 deletions docs/reference/search/request-body.asciidoc

@@ -70,13 +70,13 @@ And here is a sample response:

`query_cache`::

-coming[1.4.0.Beta1] Set to `true` or `false` to enable or disable the caching
+added[1.4.0.Beta1] Set to `true` or `false` to enable or disable the caching
of search results for requests where `?search_type=count`, i.e.
aggregations and suggestions. See <<index-modules-shard-query-cache>>.
aggregations and suggestions. See <<index-modules-shard-query-cache>>.

`terminate_after`::

-coming[1.4.0.Beta1] The maximum number of documents to collect for each shard,
+added[1.4.0.Beta1] The maximum number of documents to collect for each shard,
upon reaching which the query execution will terminate early. If set, the
response will have a boolean field `terminated_early` to indicate whether
the query execution has actually terminated early. Defaults to no