Merged
@@ -7,7 +7,7 @@
 ////
 IMPORTANT NOTE
 ==============
-This file has been generated from https://github.com/elastic/elasticsearch-net/tree/6.x/src/Tests/Tests/ClientConcepts/HighLevel/Indexing/Indexing.doc.cs.
+This file has been generated from https://github.com/elastic/elasticsearch-net/tree/6.x/src/Tests/Tests/ClientConcepts/HighLevel/Indexing/IndexingDocuments.doc.cs.
 If you wish to submit a PR for any spelling mistakes, typos or grammatical errors for this file,
 please modify the original csharp file found at the link and submit the PR with that change. Thanks!
 ////
@@ -205,12 +205,7 @@ client.BulkAll(people, b => b
 })
 .RetryDocumentPredicate((item, person) => <3>
 {
-    if (item.Error.Index == "even-index"
-        && person.FirstName == "Martijn")
-    {
-        return true;
-    }
-    return false;
+    return item.Error.Index == "even-index" && person.FirstName == "Martijn";
 })
 .DroppedDocumentCallback((item, person) => <4>
 {
2 changes: 2 additions & 0 deletions docs/client-concepts/high-level/indexing/pipelines.asciidoc
@@ -17,6 +17,8 @@ please modify the original csharp file found at the link and submit the PR with
 
 An ingest pipeline is a series of processors that are to be executed in the same order as they are declared.
 
+Let's work with the following POCOs
+
 [source,csharp]
 ----
 public class Person
@@ -53,7 +53,7 @@ public async Task SingleDocument()
 * If you need to set additional parameters when indexing you can use the fluent or object initializer syntax.
 * This will allow you finer control over the indexing of single documents.
 */
-public async Task SingleDocumentWithParameters()
+public void SingleDocumentWithParameters()
 {
     var person = new Person
     {
@@ -154,7 +154,7 @@ public async Task BulkIndexDocuments()
 *
 * The helper will also lazily enumerate an `IEnumerable<T>` collection, allowing you to index a large number of documents easily.
 */
-public async Task BulkDocumentsWithObservableHelper()
+public void BulkDocumentsWithObservableHelper()
 {
     // hide
     var people = new []
@@ -203,7 +203,7 @@ public async Task BulkDocumentsWithObservableHelper()
 * 2. `RetryDocumentPredicate` enables fine control on deciding if a document that failed to be indexed should be retried.
 * 3. `DroppedDocumentCallback` in the event a document is not indexed, even after retrying, this delegate is called.
 */
-public async Task AdvancedBulkIndexing()
+public void AdvancedBulkIndexing()
 {
     //hide
     var people = new[] { new Person() };
@@ -221,12 +221,7 @@ public async Task AdvancedBulkIndexing()
 })
 .RetryDocumentPredicate((item, person) => //<3> decide if a document should be retried in the event of a failure
 {
-    if (item.Error.Index == "even-index"
-        && person.FirstName == "Martijn")
-    {
-        return true;
-    }
-    return false;
+    return item.Error.Index == "even-index" && person.FirstName == "Martijn";
 })
 .DroppedDocumentCallback((item, person) => //<4> if a document cannot be indexed this delegate is called
 {
@@ -26,7 +26,7 @@ public class IngestNodes : DocumentationTestBase
 *
 * The simplest way to achieve this is to create a dedicated "indexing" client instance, and use it for indexing requests.
 */
-public async Task CustomClient()
+public void CustomClient()
 {
     var pool = new StaticConnectionPool(new [] //<1> list of ingest nodes
     {
@@ -45,7 +45,7 @@ public async Task CustomClient()
 * filter out the nodes that have ingest capabilities. This allows you to customise the cluster and not have to reconfigure
 * the client.
 */
-public async Task SniffingConnectionPool()
+public void SniffingConnectionPool()
 {
     var pool = new SniffingConnectionPool(new [] //<1> list of cluster nodes
     {
@@ -8,9 +8,11 @@
 namespace Tests.ClientConcepts.HighLevel.Caching
 {
     /**[[pipelines]]
-     *=== Ingest Pipelines
-     *
-     * An ingest pipeline is a series of processors that are to be executed in the same order as they are declared.
+     *=== Ingest Pipelines
+     *
+     * An ingest pipeline is a series of processors that are to be executed in the same order as they are declared.
+     *
+     * Let's work with the following POCOs
      */
     public class IngestPipelines : DocumentationTestBase
     {
@@ -46,7 +48,7 @@ public class GeoIp
 * We could achieve this requirement by creating a custom mapping and creating an ingest pipeline.
 * The Person type can then be used as-is, without making any changes.
 */
-public async Task IngestionPipeline()
+public void IngestionPipeline()
 {
     client.CreateIndex("people", c => c
         .Mappings(ms => ms
@@ -95,7 +97,7 @@ public async Task IngestionPipeline()
 *
 * For large bulk requests, it could be prudent to increase the default indexing timeout to avoid exceptions.
 */
-public async Task IncreasingTimeouts()
+public void IncreasingTimeouts()
 {
     client.Bulk(b => b
         .Index("people")
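The core change above — collapsing the `if`/`return true`/`return false` block into a single boolean expression inside `RetryDocumentPredicate` — can be sketched outside the diff as a minimal `BulkAll` configuration. This is a sketch only, assuming the `client` and `people` variables from the surrounding snippets and a NEST 6.x client; the index name and callback body mirror the diff, not a verified build:

```csharp
// Sketch: the BulkAll retry/drop hooks as they read after this PR.
// `client` is an IElasticClient and `people` an IEnumerable<Person>,
// both assumed from the documentation snippets above.
client.BulkAll(people, b => b
    .Index("people")
    .RetryDocumentPredicate((item, person) =>
        // single-expression predicate introduced by this PR:
        // retry only this specific failure case
        item.Error.Index == "even-index" && person.FirstName == "Martijn")
    .DroppedDocumentCallback((item, person) =>
        // called when a document is dropped even after retrying
        Console.WriteLine($"Could not index {person.FirstName}: {item.Error}"))
);
```

The single-expression form behaves identically to the removed `if` block; it simply returns the condition's value directly.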