
[RFC] Parallel & Batch Ingestion #12457

Open
chishui opened this issue Feb 26, 2024 · 42 comments
Labels
enhancement Enhancement or improvement to existing feature or request ingest-pipeline RFC Issues requesting major changes

Comments

@chishui
Contributor

chishui commented Feb 26, 2024

Is your feature request related to a problem? Please describe

Problem Statements

Today, users can use the bulk API to ingest multiple documents in a single request. All documents from the request are handled by one ingest node, and on this node, if an ingest pipeline is configured, documents are processed by the pipeline one at a time in sequential order (ref). An ingest pipeline is composed of a collection of processors, and the processor is the computing unit of a pipeline. Most processors are fairly lightweight, such as append, uppercase, and lowercase, so processing multiple documents one after another versus in parallel makes no observable difference. But for time-consuming processors such as neural search processors, which by their nature require more time to compute, being able to run them in parallel could save users valuable ingestion time. Apart from ingestion time, processors like neural search can also benefit from processing documents together in batches, since batch APIs reduce the number of requests to remote ML services and help avoid hitting rate-limit restrictions. (Feature request: opensearch-project/ml-commons#1840, rate-limit example from OpenAI: https://platform.openai.com/docs/guides/rate-limits)

Because the ingest flow currently lacks parallel ingestion and batch ingestion capabilities, we propose the solution below to address them.

Describe the solution you'd like

Proposed Features

1. Batch Ingestion

An ingest pipeline is constructed from a list of processors, and a single document flows through each processor one by one before it is stored in the index. Currently, both pipelines and processors can only handle one document at a time, and even with the bulk API, documents are iterated over and handled in sequential order. As shown in figure 1, to ingest doc1, it first flows through ingest pipeline 1, then through pipeline 2. Only then does the next document go through both pipelines.

[figure 1: ingest-Page-1]

To support batch processing of documents, we'll add a batchExecute API to the ingest pipeline and processors which takes multiple documents as input. We will provide a default implementation in the Processor interface that iteratively calls the existing execute API to process documents one by one, so most processors don't need to change; only when there is a need to batch-process documents (e.g. the text embedding processor) do processors have to provide their own implementation. Otherwise, even when receiving documents together, they process them one by one by default.
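
For illustration, here is a minimal sketch of what such a default implementation could look like. The signature is simplified and synchronous, and the actual OpenSearch interface may differ (for example, it may be callback-based), so treat this as an assumption-level sketch rather than the final API:

import java.util.ArrayList;
import java.util.List;

import org.opensearch.ingest.IngestDocument;

public interface Processor {

    // Existing single-document API (simplified here).
    IngestDocument execute(IngestDocument document) throws Exception;

    // Proposed batch API: the default falls back to per-document execution,
    // so existing processors keep working without any change. Processors that
    // benefit from batching (e.g. text embedding) override this method.
    default List<IngestDocument> batchExecute(List<IngestDocument> documents) throws Exception {
        List<IngestDocument> results = new ArrayList<>(documents.size());
        for (IngestDocument document : documents) {
            results.add(execute(document));
        }
        return results;
    }
}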

To batch process documents, users need to use the bulk API. We'll add two optional parameters to the bulk API so users can enable the batch feature and set the batch size. Based on the maximum_batch_size value, documents are split into batches.

Since in the bulk API different documents can be ingested into different indexes, and indexes can use the same pipelines in different orders (e.g. index "movies" uses pipeline P1 as its default pipeline and P2 as its final pipeline, while index "musics" uses P2 as its default pipeline and P1 as its final pipeline), we batch documents at the index level to avoid the complexity of handling cross-index batching (topological sorting).

2. Parallel Ingestion

Apart from batch ingestion, we also propose parallel ingestion to accompany it and boost ingestion performance. When a user enables parallel ingestion, documents from the bulk API are split into batches based on the batch size, and the batches are then processed in parallel on threads managed by a thread pool. Although the thread pool limits the maximum concurrency of parallel ingestion, it protects host resources from being exhausted by batch ingestion threads.

[figure 2: ingest-Page-2]

Ingest flow logic change

The current ingestion flow for documents can be summarized by the pseudocode below:

for (document in documents) {  
    for (pipeline in pipelines) {  
        for (processor in pipeline.processors) {  
            document = processor.execute(document)  
        }  
    }  
}

We'll change the flow to the logic shown below when the batch option is enabled.

if (enabledBatch) {
    // process batches sequentially
    batches = calculateBatches(documents);
    for (batch in batches) {
        for (pipeline in pipelines) {
            for (processor in pipeline.processors) {
                batch = processor.batchExecute(batch)
            }
        }
    }
} else if (enabledParallelBatch) {
    // process batches in parallel on a thread pool
    batches = calculateBatches(documents);
    for (batch in batches) {
        threadpool.execute(() -> {
            for (pipeline in pipelines) {
                for (processor in pipeline.processors) {
                    batch = processor.batchExecute(batch)
                }
            }
        });
    }
} else {
    // fall back to existing ingestion logic
}
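
To illustrate the index-level batching decision described above, here is a minimal sketch of what calculateBatches could look like; the class, method, and parameter names are hypothetical assumptions, and the real implementation may key batches differently:

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import org.opensearch.action.index.IndexRequest;

public final class BatchingUtil {

    // Group documents by target index, then split each group into batches
    // of at most maxBatchSize documents.
    public static List<List<IndexRequest>> calculateBatches(List<IndexRequest> documents, int maxBatchSize) {
        Map<String, List<IndexRequest>> byIndex = new LinkedHashMap<>();
        for (IndexRequest doc : documents) {
            byIndex.computeIfAbsent(doc.index(), k -> new ArrayList<>()).add(doc);
        }
        List<List<IndexRequest>> batches = new ArrayList<>();
        for (List<IndexRequest> group : byIndex.values()) {
            for (int from = 0; from < group.size(); from += maxBatchSize) {
                batches.add(group.subList(from, Math.min(from + maxBatchSize, group.size())));
            }
        }
        return batches;
    }
}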

Update to Bulk API

We propose new parameters for the bulk API; all of them are optional.

Parameter | Type | Description
batch_ingestion_option | String | Configures whether to enable batch ingestion. It has three options: none, enable, and parallel. By default it's none. When set to enable, batch ingestion is enabled and batches are processed in sequential order. When set to parallel, batch ingestion is enabled and batches are processed in parallel.
maximum_batch_size | Integer | The maximum number of documents per batch. Only takes effect when the batch ingestion option is set to enable or parallel. It's 1 by default.
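
As an illustrative example (the values and document bodies are hypothetical), a bulk request using the proposed parameters could look like:

POST _bulk?batch_ingestion_option=parallel&maximum_batch_size=10
{ "index": { "_index": "movies", "_id": "1" } }
{ "title": "..." }
{ "index": { "_index": "musics", "_id": "2" } }
{ "title": "..." }

With these settings, documents targeting "movies" and "musics" would be batched separately (per the index-level batching above) into batches of at most 10 documents, and the batches would be processed in parallel.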

3. Split and Redistribute Bulk API

Users tend to use the bulk API to ingest many documents, which can sometimes be very time-consuming. To achieve lower ingestion time, they have to use multiple clients to make multiple smaller bulk requests so that the requests can be distributed to different ingest nodes. To offload this burden from the user side, we can support splitting and redistributing bulk requests on the server side and help distribute the ingest load more evenly.
Note: although brought up here, we think it's better to discuss this topic in a separate RFC, which will be published later.

Related component

Indexing:Performance

Describe alternatives you've considered

No response

Additional context

No response

@chishui chishui added enhancement Enhancement or improvement to existing feature or request untriaged labels Feb 26, 2024
@peternied peternied added RFC Issues requesting major changes Indexing Indexing, Bulk Indexing and anything related to indexing and removed untriaged labels Feb 28, 2024
@peternied
Member

peternied commented Feb 28, 2024

[Triage - attendees 1 2 3 4 5]
@chishui Thanks for creating this RFC, it looks like this could be related to [1] [2]

@chishui
Contributor Author

chishui commented Feb 29, 2024

@peternied Yes, it looks like the proposed feature 3 in this RFC has a very similar idea to the streaming API, especially the coordinator part that load-balances the ingest load. Feature 3 just tries to reuse the bulk API.

Features 1 and 2 are different from the streaming API, as they focus on parallel and batch ingestion on a single node, which would happen after the streaming API or feature 3.

@msfroh
Collaborator

msfroh commented Feb 29, 2024

@dbwiddis, @joshpalis -- you may be interested in this, as you've been thinking about parallel execution for search pipelines. For ingest pipelines, the use case is a little bit more "natural", because we already do parallel execution of _bulk requests (at least across shards).

@chishui, can you confirm where exactly the parallel/batch execution would run? A bulk request is received on one node (that serves as coordinator for the request), then the underlying DocWriteRequests get fanned out to the shards. Does this logic run on the coordinator or on the individual shards? I can never remember where IngestService acts.

@chishui
Contributor Author

chishui commented Mar 1, 2024

@msfroh, the "parallel/batch execution" would be run on the ingest pipeline side. The DocWriteRequests are first processed by ingest pipeline and its processors on a single ingest node, then the processed documents are fanned out to shards to be indexed. To answer your question, the logic would be run on the coordinator.

@chishui
Contributor Author

chishui commented Mar 5, 2024

Additional information about parallel ingestion:

Performance:

Lightweight processors - no improvement

We benchmarked some lightweight processors (lowercase + append) with the current solution and the parallelized batch solution, and we saw no improvement in either latency or throughput. This aligns with our expectation: these processors are already very fast, so parallelization wouldn't help and could add some overhead.

ML processors - already async

ML processors do the heavy-lifting work, but they already run the predict logic in a separate thread (code), which makes the ingestion of that document asynchronous.

Reasons to have parallel ingestion

  1. A general solution: the parallel ingestion proposed here parallelizes at the document level, so any time-consuming processor, whether existing today or introduced later, can benefit directly without making any changes.
  2. Maximum concurrency: today, if a processor makes its logic async, only that processor and the following ones run in a separate thread; all previous processors still run synchronously in the same thread. Parallel ingestion can run the whole ingestion flow of a document in parallel to achieve maximum concurrency.
  3. Gives users control: it provides users the flexibility to control the concurrency level through the batch size, and users can even disable parallel ingestion through a request parameter.
  4. Less development effort and resource usage for processors that want concurrency: today, if a processor wants concurrency, it has to implement its own concurrency logic and may need to create its own thread pool. That is unnecessary, since for a single document the processors must run one after another anyway, which wastes resources and adds thread-switching overhead.

Reasons not to have parallel ingestion

  1. There is no urgent need or immediate gain.

@model-collapse

Scenario for batch processor in neural search document ingestion:
Since OpenSearch 2.7, ml-commons has offered a remote connector, allowing OpenSearch to connect to remote inference endpoints. However, while ml-commons can take a list of strings as input, it only supports invoking the inference API on each input text one by one. The pipeline looks like the following:
[figure: pipeline1]
Intuitively, to use the batch API of many third-party inference providers such as OpenAI and Cohere, we can let ml-commons pass the list of strings through as "a batch" to the API, like this:
[figure: pipeline2]
However, this approach cannot fully leverage GPU compute for two reasons: 1) the batch size is tied to how many fields the processor picks, whereas each API has its own suggested batch size, such as 32 or 128; 2) in deep learning for NLP, texts in a batch should have similar lengths to achieve the highest GPU efficiency, but texts from different fields will typically have diverse lengths.
The best option is to implement a "batched" processor and recompose the batches by collecting texts from the same field, as shown below:
[figure: folding]
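
A minimal sketch of that recomposition step, assuming a simplified document representation (the types and names here are hypothetical, not the actual processor code): texts are collected per field across all documents in the batch so that each inference call contains texts of similar length and can be re-chunked to the provider's recommended batch size:

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public final class FieldBatching {

    // Collect the text of each embedding field across all documents in the batch,
    // so one inference request carries many values of the same field.
    public static Map<String, List<String>> groupTextsByField(List<Map<String, String>> documents) {
        Map<String, List<String>> byField = new LinkedHashMap<>();
        for (Map<String, String> doc : documents) {
            for (Map.Entry<String, String> field : doc.entrySet()) {
                byField.computeIfAbsent(field.getKey(), k -> new ArrayList<>()).add(field.getValue());
            }
        }
        return byField; // each value list can then be split to the provider's preferred batch size
    }
}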

Alternative Approach
There is one approach called "dynamic batching" which holds flushable queues in ml-commons. Each queue gathers text inputs of similar lengths from requests to ml-commons. On timeout, or when a queue is full, the queue is flushed and the batch API of the inference service is invoked. The downside of this approach is that ml-commons would consume quite a lot of memory to hold the queues, and a timeout-based queue implementation carries more risk (deadlocks, blocking calls) than batched processors.

Why do we need the batch API?
GPUs compute using block-wise SIMD (single instruction, multiple data). In AI, running inference with input tensors stacked together (as a batch) effectively increases GPU utilization, which makes it a more economical choice than a single-request API.

@gaobinlong
Contributor

@reta , could you also help to take a look at this RFC, thanks!

@reta
Collaborator

reta commented Mar 12, 2024

Thanks @gaobinlong

@reta , could you also help to take a look at this RFC, thanks!

@msfroh @model-collapse @chishui I think the idea of enhancing ingest processors with batch processing is sound in general, but it may have unintended consequences due to the complexity of the bulk APIs in particular:

  • for example, bulk supports scripts and upserts, and combinations of those ... changing the ingestion sequence could lead to very surprising results (by and large, the bulk API has to provide some guarantees on document processing)
  • also, picking the parallelism and batching becomes a nightmare (in my opinion); just today, picking the right batch for bulk is very difficult, but adding yet more parallelization / internal batching would make it much harder

Making the ingestion API streaming based (apologies again for bribing for #3000) is fundamentally a different approach to ingestion - we would be able to vary the ingestion based on how fast the documents could be ingested at this moment of time, without introducing the complexity of batch / parallelism management.

@nknize I think you might be eager to chime in here :)

@model-collapse

Thanks @gaobinlong

@reta , could you also help to take a look at this RFC, thanks!

@msfroh @model-collapse @chishui I think the idea of enhancing ingest processors with batch processing is sound in general, but it may have unintended consequences due to the complexity of the bulk APIs in particular:

  • for example, bulk supports scripts and upserts, and combinations of those ... changing the ingestion sequence could lead to very surprising results (by and large, the bulk API has to provide some guarantees on document processing)
  • also, picking the parallelism and batching becomes a nightmare (in my opinion); just today, picking the right batch for bulk is very difficult, but adding yet more parallelization / internal batching would make it much harder

Making the ingestion API streaming based (apologies again for bribing for #3000) is fundamentally a different approach to ingestion - we would be able to vary the ingestion based on how fast the documents could be ingested at this moment of time, without introducing the complexity of batch / parallelism management.

@nknize I think you might be eager to chime in here :)

Thanks for the comment. For machine learning inference, using a batched inference API significantly increases GPU utilization and reduces ingestion time, so batching is very important. You pointed out that "picking the right batch for bulk is very difficult, but adding yet more parallelization / internal batching would make it much harder". Can you elaborate more on that and give your suggestions on how to make ingestion faster?

@chishui
Contributor Author

chishui commented Mar 13, 2024

@reta thanks for the feedback

bulk supports scripts and upserts, and combinations of those ... changing the ingestion sequence could lead to very surprising results

The proposal only targets the ingest pipeline and its processors; it won't touch the indexing part. Even when documents are processed in batches, the following are still ensured:

  1. A single document is processed by processors sequentially, in the same order as the processors are defined in the pipeline.
  2. Only after all documents in a bulk request have been processed by the ingest pipeline are they dispatched to be indexed on shards, which is the same as the current logic.

Whether the action is index, update, upsert, or script, it is processed by the ingest pipeline in the same way. I don't see how the proposal would cause "changing the ingestion sequence"; please let me know if I'm missing a piece of the puzzle.

@chishui
Contributor Author

chishui commented Mar 13, 2024

Due to the aforementioned reasons, we won't see immediate gains from delivering "parallel ingestion", so we have decided to deprioritize that part of this RFC and focus mainly on "batch ingestion".

@reta
Collaborator

reta commented Mar 13, 2024

I don't see how the proposal would cause "changing the ingestion sequence"; please let me know if I'm missing a piece of the puzzle.

@chishui The parallelization (which is mentioned in this proposal) naturally changes the order in which documents are ingested, does that make sense? I think your last comment reflects that, thank you.

Can you elaborate more on that and give your suggestions on how to make ingestion faster?

@model-collapse the problem with batching (at least how it is implemented currently in OS and what we've seen so far with bulk API) is that choosing the right batch size is difficult, taking into account that there are circuit breakers in place that try to estimate the heap usage etc. (as of the moment of ingestion) and may reject the request sporadically.

@chishui
Contributor Author

chishui commented Mar 13, 2024

@reta in ingest flow when documents are processed by ingest pipeline, could one document depend on another? Even for today, text_embedding and sparse_encoding processors have their inference logic run in a thread which makes the document ingestion run in parallel, right? https://github.com/opensearch-project/ml-commons/blob/020207ecd6322fed424d5d54c897be74623db103/plugin/src/main/java/org/opensearch/ml/task/MLPredictTaskRunner.java#L194

@reta
Collaborator

reta commented Mar 13, 2024

@reta in ingest flow when documents are processed by ingest pipeline, could one document depend on another?

@chishui yes, in general documents could depend on each other (just think about an example of the documents that are ingested out of any CDC or message broker, where the documents are being constructed as a sequence of changes).

Even for today, text_embedding and sparse_encoding processors have their inference logic run in a thread which makes the document ingestion run in parallel, right? https://github.com/opensearch-project/ml-commons/blob/020207ecd6322fed424d5d54c897be74623db103/plugin/src/main/java/org/opensearch/ml/task/MLPredictTaskRunner.java#L194

This is purely plugin specific logic

@gaobinlong
Contributor

@chishui yes, in general documents could depend on each other (just think about an example of the documents that are ingested out of any CDC or message broker, where the documents are being constructed as a sequence of changes).

In my understanding, in terms of pipeline execution, each document in a bulk runs independently; no ingest processor can access other in-flight documents in the same bulk request, so during pipeline execution, maybe a document cannot depend on another? And subsequently, for indexing (calling the Lucene API to write), we have the write thread pool and each document is processed in parallel, so the indexing order within a bulk cannot be guaranteed; the client side needs to ensure indexing order. @reta, correct me if something is wrong, thank you!

@gaobinlong
Contributor

gaobinlong commented Mar 14, 2024

I think pipeline execution runs before the indexing process: first we use a single transport thread to execute pipelines for all documents in a bulk request, and then we use the write thread pool to process the newly generated documents in parallel, so it seems that when executing pipelines for the documents, the execution order doesn't matter.

@reta
Collaborator

reta commented Mar 14, 2024

Thanks @gaobinlong

In my understanding, in terms of pipeline execution, each document in a bulk runs independently; no ingest processor can access other in-flight documents in the same bulk request, so during pipeline execution, maybe a document cannot depend on another?

The documents could logically depend on each other (I am not referring to any sharing that may happen in an ingest processor). Since we are talking about bulk ingestion, where documents can be indexed / updated / deleted, we certainly don't want the deletes to be "visible" before the documents are indexed.

I think pipeline execution runs before the indexing process: first we use a single transport thread to execute pipelines for all documents in a bulk request, and then we use the write thread pool to process the newly generated documents in parallel, so it seems that when executing pipelines for the documents, the execution order doesn't matter.

This part is not clear to me: AFAIK we offload processing of bulk requests (batches) to thread pool, not individual documents. Could you please point out where we parallelize the ingestion of the individual documents in the batch? Thank you

@gaobinlong
Contributor

The documents could logically depend on each other (I am not referring to any sharing that may happen in an ingest processor). Since we are talking about bulk ingestion, where documents can be indexed / updated / deleted, we certainly don't want the deletes to be "visible" before the documents are indexed.

Yeah, you're correct, but this RFC only focuses on the execution of the ingest pipeline, which happens on the coordinating node - just the pre-processing part, not the indexing part. Indexing operations will not happen before the ingest pipeline execution completes for all documents in a bulk request.

This part is not clear to me: AFAIK we offload processing of bulk requests (batches) to thread pool, not individual documents. Could you please point out where we parallelize the ingestion of the individual documents in the batch? Thank you

After the ingest pipeline has executed for all documents in a bulk, the coordinating node groups these documents by shard and sends them to different shards; each shard processes its documents in parallel, so at least at the shard level, the documents in a bulk request are processed in parallel. But I think this RFC will not touch the processing logic within each shard, which processes the create/update/delete operations for the same document in order, so it's not harmful.

@model-collapse

@reta Where do you estimate the circuit breaking will happen? If you mean it will happen inside the batch processor's own processing, that could be, because it is impossible to estimate how much memory its code will consume. Therefore, we need to let users configure the batch_size in the bulk API.

@reta
Collaborator

reta commented Mar 15, 2024

@reta Where do you estimate the circuit breaking will happen?

@model-collapse there are no estimates one could make upfront; this is purely an operational issue (it basically depends on what is going on at the moment)

Therefore, we need to let users configure the batch_size in the bulk API.

Per my previous comment, users have difficulties with that: the same batch_size may work now and not 10 minutes from now (if the cluster is under duress). The issue referenced there has all the details.

@chishui
Contributor Author

chishui commented Mar 21, 2024

Benchmark Results on Batch ingestion with Neural Search Processors

We implemented a PoC of batch ingestion locally and enabled sending batched documents to remote ML servers. We used "opensearch-benchmark" to benchmark both the batch-enabled and batch-disabled cases against different ML services (SageMaker, Cohere, OpenAI); the benchmark results are below.

Benchmark Results

Environment Setup

  • Based on OpenSearch-v2.12.0
  • OpenSearch host type: r6a.4xlarge
    • 16 vCPU
  • 1 shard
  • OpenSearch benchmark host type: c6a.4xlarge
  • OpenSearch JVM: Xms:48g, Xmx: 48g
  • Data: https://github.com/iai-group/DBpedia-Entity/. (300k text only)

SageMaker

Environment Setup

  • SageMaker host type: g5.xlarge
  • Processor: Sparse Encoding
  • Benchmark Setup
    • Bulk size: 100
    • client: 1
Metrics | no batch | batch (batch size=10)
Min Throughput (docs/s) | 65.51 | 260.1
Mean Throughput (docs/s) | 93.96 | 406.12
Median Throughput (docs/s) | 93.86 | 408.92
Max Throughput (docs/s) | 99.76 | 443.08
Latency P50 (ms) | 1102.16 | 249.544
Latency P90 (ms) | 1207.51 | 279.467
Latency P99 (ms) | 1297.8 | 318.965
Total Benchmark Time (s) | 3095 | 770
Error Rate (%) | 17.10% [1] | 0

Cohere

Environment Setup

  • Processor: text embedding
  • Benchmark Setup
    • Bulk size: 100
    • client: 1
Metrics | no batch | batch (batch size=10)
Min Throughput (docs/s) | 72.06 | 74.87
Mean Throughput (docs/s) | 80.71 | 103.7
Median Throughput (docs/s) | 80.5 | 103.25
Max Throughput (docs/s) | 83.08 | 107.19
Latency P50 (ms) | 1193.86 | 963.476
Latency P90 (ms) | 1318.48 | 1193.37
Latency P99 (ms) | 1926.17 | 1485.22
Total Benchmark Time (s) | 3756 | 2975
Error Rate (%) | 0.47 | 0.03

OpenAI

Environment Setup

  • Processor: text embedding
  • model: text-embedding-ada-002
  • Benchmark Setup
    • Bulk size: 100
    • client: 1
Metrics | no batch | batch (batch size=10)
Min Throughput (docs/s) | 49.25 | 48.62
Mean Throughput (docs/s) | 56.71 | 92.2
Median Throughput (docs/s) | 57.53 | 92.84
Max Throughput (docs/s) | 60.22 | 95.32
Latency P50 (ms) | 1491.42 | 945.633
Latency P90 (ms) | 2114.53 | 1388.97
Latency P99 (ms) | 4269.29 | 2845.97
Total Benchmark Time (s) | 5150 | 3275
Error Rate (%) | 0.17 | 0

Results

  1. Batch ingestion has significantly higher throughput and lower latency.
  2. Batch ingestion has a much lower error rate compared to the non-batch result.

[1]: The errors come from SageMaker 4xx responses, which were also reported in ml-commons issue opensearch-project/ml-commons#2249

@gaobinlong
Contributor

@andrross @sohami could you experts also help take a look at this RFC? Any comments would be appreciated, thank you!

@navneet1v
Contributor

We do have a pull request in ml-commons, opensearch-project/ml-commons#1958, which changes the HTTP client from sync to async and is supposed to be released in 2.14.

Have we benchmarked the performance of this change? How much does throughput increase after it?

@chishui
Contributor Author

chishui commented Apr 2, 2024

the problem with batching (at least how it is implemented currently in OS and what we've seen so far with bulk API) is that choosing the right batch size is difficult

@reta to address your concern, we plan to provide an automation tool to help users run a series of benchmarks against their OpenSearch cluster with different batch sizes and recommend the optimal batch size. Here is the feature link: #13009

Could you please take a look and see if your concerns are addressed? We really want to push this forward to benefit users.

@chishui
Contributor Author

chishui commented Apr 2, 2024

Have we benchmarked the performance of this change? How much does throughput increase after it?

The async HTTP client benchmark results are attached here: opensearch-project/ml-commons#1839

@chishui
Contributor Author

chishui commented Apr 7, 2024

@reta since we are only pursuing batch ingestion in this RFC, and to address your concern that users will have difficulty tuning the batch size, we have also proposed an automation tool to make it easier for users: opensearch-project/opensearch-benchmark#508. Are there any other things you believe we should address before moving forward?

@reta
Collaborator

reta commented Apr 7, 2024

@chishui I honestly don't know to what extent the tool could help; you may need to provide a guide for users explaining how it is supposed to be used. At least it may give some confidence.

AFAIK OpenSearch Benchmark does targeted measurements for specific operations (this is what it was designed for), but does not measure interleaving operational workloads (and shouldn't, I think): e.g. running search while ingesting new documents, etc ...

@chishui
Contributor Author

chishui commented Apr 8, 2024

@reta IMO, having this batch ingestion feature is going from 0 to 1: users can start using it to accelerate their ingestion process and are less likely to get throttled by the remote ML server (the benefits are shown in the benchmark results above). Maybe it's not easy for them to find the optimal batch size initially, but they have an option and can benefit immediately once they use the batch feature. Having a tool that helps them find an optimal batch size automatically is then going from 1 to 10, making this feature easy to use for everyone.

you may need to provide a guide for users explaining how it is supposed to be used

Yes, we definitely need documentation on the OpenSearch website when we introduce this feature, explaining how it should be used, how it benefits users, and how the tool can help.

but does not measure interleaving operational workloads (and shouldn't, I think): e.g. running search while ingesting

That's what I understand as well.

@Zhangxunmt

"All documents from this request are handled by one ingest node" - is this a correct statement? For a multi-node cluster, will the documents in the _bulk request be distributed to each node for ingestion?

@chishui
Contributor Author

chishui commented Apr 16, 2024

@Zhangxunmt thanks for the comment. The RFC is only about preprocessing; all documents are handled by a single ingest node, which remains the same as the current behavior. After preprocessing comes the indexing process where, as you said, documents are distributed to different nodes; that also remains the same, as we don't touch that part of the logic.

@Zhangxunmt

What is the preprocessing? Does it mean the processors in the pipeline? Recently we noticed that in neural search, the text-embedding processor in the ingest pipeline sends remote inference traffic proportional to the number of data nodes in the cluster. That means the _bulk request takes N documents, and those N documents are evenly distributed among all nodes for the text-embedding processor to run remote inference for vectorization. So does this mean all the docs are divided into smaller batches and preprocessed on every node? @chishui

@chishui
Contributor Author

chishui commented Apr 17, 2024

What is the preprocessing? Does it mean the processors in the pipeline?

Yes

That means the _bulk request takes N documents, and those N documents are evenly distributed among all nodes for the text-embedding processor to run remote inference for vectorization.

The text-embedding processor actually runs on the node that accepts the "_bulk" API. When it needs to send out text for inference, it can route the requests to "ml" nodes depending on the plugins.ml_commons.only_run_on_ml_node setting, right?

So does this mean all the docs are divided into smaller batches and preprocessed on every node?

Basically, since we won't change the inference API, if texts are already dispatched to every node for inference, then with batching enabled, batched texts are dispatched to every node.

@Zhangxunmt

Zhangxunmt commented Apr 17, 2024

Got it, I think it makes sense. In most cases the ml_node setting is false; AOS doesn't have ML nodes so far, so the inference happens on data nodes.

Based on the prior discussion here, the pre-processing runs on the node that accepts the _bulk request, but it calls the "Predict" API in ml-commons, which routes the inference traffic to all data nodes. So essentially the whole cluster still handles the batch of documents. In the case of a single text-embedding processor in an ingest pipeline, does the proposed parallel ingestion still help performance, since the processor itself is already handling docs in parallel?

@Zhangxunmt

@chishui One takeaway from this issue is that we'd better use a bigger cluster (>10 nodes) for the performance benchmarking, because the number of nodes is directly proportional to the concurrent TPS we send to the model service. A smaller OpenSearch cluster easily reaches its hard limit on concurrent requests and may not represent real customer scenarios.

@chishui
Contributor Author

chishui commented Apr 18, 2024

does the proposed parallel ingestion still help performance, since the processor itself is already handling docs in parallel?

@Zhangxunmt I explained the benefits of having parallel ingestion in this comment #12457 (comment). In the scenario you described, it won't help the performance.

One takeaway from this opensearch-project/ml-commons#2249 is that we'd better use a bigger cluster

I think it's actually the opposite. Even with only one data node in the cluster, inference is done in a thread pool, and the thread pool size controls the maximum concurrent TPS. Based on our benchmark results, without batching each document is inferenced on its own thread and can easily run into 4xx responses from SageMaker, whereas with batching each batch runs on a single thread and is less likely to hit 4xx. So with a bigger cluster you would have higher concurrency and would get 4xx errors more often, and batching can definitely help with this.

@mgodwan mgodwan added ingest-pipeline and removed Indexing Indexing, Bulk Indexing and anything related to indexing Indexing:Performance labels Apr 25, 2024
@dblock
Member

dblock commented Apr 29, 2024

I am late to this RFC, but wanted to highlight #13306 (comment) for those who commented here - could you please take a quick look? I think the proposed API should have been discussed a little more, starting with the inconsistent use of maximum_ vs. max_, but more importantly whether we need batch_ingestion_option at all.

@xinlamzn xinlamzn moved this from 2.14.0 (Release window opens April 30 2024 closes May 14 2024 ) to 3.0.0 (TBD) in OpenSearch Project Roadmap Apr 29, 2024
@xinlamzn xinlamzn moved this from 3.0.0 (TBD) to 2.14.0 (Release window opens April 30 2024 closes May 14 2024 ) in OpenSearch Project Roadmap Apr 29, 2024
dblock pushed a commit that referenced this issue Apr 30, 2024
* [PoC][issues-12457] Support Batch Ingestion

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>

* Rewrite batch interface and handle error and metrics

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>

* Remove unnecessary change

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>

* Revert some unnecessary test change

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>

* Keep executeBulkRequest main logic untouched

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>

* Add UT

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>

* Add UT & yamlRest test, fix BulkRequest se/deserialization

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>

* Add missing java docs

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>

* Remove Writable from BatchIngestionOption

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>

* Add more UTs

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>

* Fix spotlesscheck

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>

* Rename parameter name to batch_size

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>

* Add more rest yaml tests & update rest spec

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>

* Remove batch_ingestion_option and only use batch_size

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>

* Throw invalid request exception for invalid batch_size

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>

* Update server/src/main/java/org/opensearch/action/bulk/BulkRequest.java

Co-authored-by: Andriy Redko <drreta@gmail.com>
Signed-off-by: Liyun Xiu <chishui2@gmail.com>

* Remove version constant

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>

---------

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>
Signed-off-by: Liyun Xiu <chishui2@gmail.com>
Co-authored-by: Andriy Redko <drreta@gmail.com>
@model-collapse model-collapse moved this from 2.14.0 (Release window opens April 30 2024 closes May 14 2024 ) to Upcoming (Release version TBD) in OpenSearch Project Roadmap Apr 30, 2024
dblock pushed a commit that referenced this issue Apr 30, 2024
…13462)

* Support batch ingestion in bulk API (#12457) (#13306)

(cherry picked from commit 1219c56)

* Adjust changelog item position to trigger CI

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>

---------

Signed-off-by: Liyun Xiu <xiliyun@amazon.com>
finnegancarroll pushed a commit to finnegancarroll/OpenSearch that referenced this issue May 10, 2024
…earch-project#13306)
