[RFC] Searchable Remote Index #2900

Closed
Tracked by #2919
sohami opened this issue Apr 14, 2022 · 13 comments
Labels: discuss, distributed framework, enhancement, RFC

Comments

@sohami
Collaborator

sohami commented Apr 14, 2022

Today, the data indexed in OpenSearch is stored locally on the storage attached to the instance, which limits capacity to the available instance storage. To store more data, users have to scale their hot tier, which is costly. To save cost, users can instead store data in a remote store using the snapshot mechanism with a configured repository. However, to analyze data that lives in the remote store, users currently need to restore the entire index to the hot tier, which again requires scaling the hot tier based on the size of the indices being restored and can be costly. This RFC proposes a mechanism to directly create a read-only index backed by data in the remote store in snapshot format and to serve search requests on it without downloading all of the data from the remote store. This mechanism will also serve as a core building block of the search path for other in-flight remote store projects [1][2].

Proposed Solution

We propose to provide the capability to search a remote index backed by a snapshot in a configured repository of any cloud provider supported by OpenSearch (AWS S3, GCS, etc.). This will be built using learnings from the UltraWarm technology available in the AWS managed OpenSearch service. The index data will not be fully downloaded to the local store; instead it will be fetched on demand based on the search request. To improve the performance of repeated search queries, a local disk-based LRU cache will store already downloaded data, reducing round trips to the remote store and download latency. Users will need to configure nodes with a pre-defined cache size to be used by remote index shards. Note: to provide flexibility, a node can host both remote index shards and regular index shards.
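
As a rough illustration of the cache behavior described above (not an actual OpenSearch class; the real implementation would track files on local disk and delete them on eviction), a byte-bounded LRU could look like this:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Minimal sketch of a byte-bounded LRU cache for blocks already downloaded
 * from the remote store. When the configured budget is exceeded, the least
 * recently used entries are evicted (the real cache would also delete the
 * corresponding files from local disk).
 */
class RemoteBlockLruCache<K> {
    private final LinkedHashMap<K, Long> blocks = new LinkedHashMap<>(16, 0.75f, true); // access order
    private final long maxBytes; // pre-configured per-node cache size
    private long usedBytes;

    RemoteBlockLruCache(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    synchronized void addBlock(K key, long sizeInBytes) {
        Long previous = blocks.put(key, sizeInBytes);
        usedBytes += sizeInBytes - (previous == null ? 0L : previous);
        Iterator<Map.Entry<K, Long>> it = blocks.entrySet().iterator();
        while (usedBytes > maxBytes && it.hasNext()) {
            usedBytes -= it.next().getValue(); // evict least recently used first
            it.remove();
        }
    }

    synchronized boolean touch(K key) {
        return blocks.get(key) != null; // get() refreshes recency in access-order mode
    }
}
```
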
We also plan to support configuring a special node attribute that can be used to allocate only the shards of these remote indices. This will provide resource isolation between hot tier workloads and search workloads on remote indices, since the latter will be executed on specific nodes based on shard allocation. It will also keep the local storage used for hot indices isolated from the cached data of remote indices. To that end, we will need changes in the shard allocation and recovery mechanisms for remote index shards. One thing to consider is that, because these shards are backed by the remote store, they will not need to perform peer recovery from primary shards; recovery can be done directly from the remote store. The durability of these shards' data will be tied to the remote store, not to the loss of the node on which the shards are assigned.
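
To make the isolation idea concrete, one way to express it with today's allocation filtering is sketched below. This is illustration only: the attribute name `remote_search` is hypothetical, and the final design may instead introduce a dedicated node role.

```java
import org.opensearch.common.settings.Settings;

public class RemoteIndexAllocationExample {
    /**
     * Index settings that pin shards to nodes started with a hypothetical
     * `node.attr.remote_search: true` entry in opensearch.yml, reusing the
     * existing allocation-filtering mechanism purely for illustration.
     */
    public static Settings remoteIndexSettings() {
        return Settings.builder()
            .put("index.routing.allocation.require.remote_search", "true")
            .build();
    }
}
```
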

Apart from the functional aspects, there are a few performance-related items that will help improve search latency on these indices: a) Concurrent segment search. Support for this was added as part of [3], with exceptions for aggregations. If it is enabled for hot indices, it should work out of the box for remote indices too. b) Prefetching of the data from the remote store required to serve the search request. This optimization is focused on aggregation-based queries. During execution, the docIds are first found and scored (if needed), and then the docValues for the matched documents are fetched to perform the aggregations at the shard level. Fetching these docValues sequentially is expensive for a cold query. We can build a mechanism to collect the docValues information for the matched document ids that will be needed during the aggregation phase, and then prefetch those docValues from the remote store in parallel rather than sequentially (during the aggregation phase) to boost search query performance.
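
A hypothetical sketch of the prefetch idea in (b): once the matched docIds are known, the doc-values data they need is pulled from the remote store in parallel before the aggregation phase reads it. All names here are made up for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Illustrative only; not an actual OpenSearch class. */
class DocValuesPrefetcher {

    /** Stand-in for a byte range of a doc-values file that lives in the remote store. */
    interface BlockRef {
        byte[] download(); // blocking round trip to the remote store
    }

    private final ExecutorService pool = Executors.newFixedThreadPool(8);

    /**
     * Downloads all doc-values blocks needed by the matched documents in
     * parallel, so the aggregation phase only reads from the local cache.
     */
    void prefetch(List<BlockRef> blocksForMatchedDocs) throws Exception {
        List<Future<byte[]>> pending = new ArrayList<>();
        for (BlockRef block : blocksForMatchedDocs) {
            pending.add(pool.submit(block::download)); // fan out the downloads
        }
        for (Future<byte[]> f : pending) {
            f.get(); // wait for all blocks before aggregation begins
        }
    }
}
```
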

Feedback Requested

  • In phase 1, we plan to support remote indices in snapshot format. We would like feedback from the community on which snapshot repositories are most commonly used.
  • Do we need the flexibility to create remote indices from multiple different repositories, or is a single global repository for such indices good enough?
  • Should we have a separate API for creating remote indices, should we extend the restore API, or should we support both?
  • What kinds of search workloads do users typically run on remote indices?

Next Steps

Based on feedback from the community, we will start on the high-level design to figure out the user experience and the end-to-end flow to support remote indices backed by snapshots.

Things Not Covered

This proposal does not cover the following items for now:

  • Searching remote indices in formats other than snapshot.
  • Supporting hybrid shard copies for an index, i.e. some shards serving only writes while others serve only reads. With this mechanism, all shard copies of an index will be read-only.
  • Mechanisms to migrate data from local storage to a remote store or snapshot store.
  • A full end-to-end storage tiering solution.

References

  1. Feature Proposal: Pluggable Translog #1319
  2. [Feature Proposal] Add Remote Storage Options for Improved Durability #1968
  3. Concurrent Searching (Experimental) #1500
@sohami added the enhancement and untriaged labels on Apr 14, 2022
@andrross added the discuss, distributed framework, and RFC labels and removed the untriaged label on Apr 14, 2022
@dblock
Member

dblock commented Apr 19, 2022

So the tl;dr is that storage would be backed by remote storage with a cache. This makes sense. The part I wanted to comment on is: Should we have a separate API for creating remote indices, should we extend the restore API, or should we support both? I feel like the end game is a tiered storage API that lets me create/update options on an index that say things like "store this on disk", "store this in S3", or "store this on disk and replicate to S3 as a backup". The other way to look at this could be "configure snapshots" (exists today) and "mount snapshots as read-only indexes" (aka "create an index backed by snapshots"). In either case, I would work backwards from that end state in terms of actual APIs and try not to introduce anything that we will knowingly want to deprecate. It would be good to read actual API proposals and be able to compare.

@muralikpbhat

Nice proposal. A few questions/suggestions:

In phase 1, we plan to support remote indices in snapshot format.

Given the remote storage proposal, would it make sense to support search on that data instead of expecting a snapshot format? Maybe support both?

A related suggestion: since the remote storage will also support writes, should we avoid the read-only assumption on the read side from the beginning? It might be OK to start with a read-only index, but it would be great if the design were extensible to cover read-write indices as well.

Should we have a separate API for creating remote indices, should we extend the restore API, or should we support both?

If there is continuous sync to the remote storage, we may not need an explicit API to create a remote index. However, if we place a restriction that it has to be 'read only' before we read, we might need a way for the customer to mark it read-only and optionally an API to delete it from the hot cluster?
If we are depending on snapshots as proposed here, we definitely don't need an API, as we can rely on the snapshot API itself to create the remote index (implicitly).

I did not understand the second part of the question above: "should we extend the restore API for it or provide support in both places".

@andrross
Member

Given the #1968 proposal, would it make sense to support search on that data instead of expecting a snapshot format? Maybe support both?

The goal is definitely to support both. We don't plan to design and implement this with the assumption that remote indexes will always be read-only. The ability to search the snapshot format seems like a good incremental feature to deliver on the way to supporting reads from the writable remote storage that will be delivered by #1968.

we can rely on the snapshot API itself to create the remote index (implicitly)

Can you elaborate a bit more on what you mean by this?

@sohami
Collaborator Author

sohami commented Apr 21, 2022

Nice proposal. A few questions/suggestions:

In phase 1, we plan to support remote indices in snapshot format.

Given the remote storage proposal, would it make sense to support search on that data instead of expecting a snapshot format? Maybe support both?

The end goal is to make search on a remote-backed index agnostic to the underlying data format. The OpenSearch engine communicates with the data via a Directory layer, which can have different implementations based on the underlying storage format. The plan here is to focus on the snapshot-based format to start with and build support for other formats once they are available. But yes, the plan is to have an extensible design.
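
To illustrate the on-demand behavior behind that Directory layer, here is a heavily simplified, hypothetical sketch; it does not use the actual Lucene Directory API or any existing OpenSearch class, and all names are made up.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Hypothetical, simplified view of the Directory-level abstraction: the engine
 * asks for a segment file, and the implementation decides whether to serve it
 * from the local cache or fetch it from the remote store on demand.
 */
interface SegmentFileSource {
    byte[] readFile(String segmentFileName) throws IOException;
}

class OnDemandSnapshotFileSource implements SegmentFileSource {
    private final Path cacheDir;            // node-local cache directory (assumed to exist)
    private final RemoteStoreClient remote; // hypothetical remote-store client

    OnDemandSnapshotFileSource(Path cacheDir, RemoteStoreClient remote) {
        this.cacheDir = cacheDir;
        this.remote = remote;
    }

    @Override
    public byte[] readFile(String name) throws IOException {
        Path cached = cacheDir.resolve(name);
        if (!Files.exists(cached)) {
            // cache miss: pull only this file (or block) from the snapshot repository
            Files.write(cached, remote.download(name));
        }
        return Files.readAllBytes(cached);
    }

    /** Stand-in for whatever client the repository plugin exposes. */
    interface RemoteStoreClient {
        byte[] download(String fileName) throws IOException;
    }
}
```
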

A related suggestion: since the remote storage will also support writes, should we avoid the read-only assumption on the read side from the beginning? It might be OK to start with a read-only index, but it would be great if the design were extensible to cover read-write indices as well.

Related to the read-only index, here is my thinking. Currently OpenSearch doesn't support a mix of write and read shards for an index. #2578 talks about supporting that in the long run, where an index can have write-only shards and read-only shards and appropriate requests can be routed to them. Once we have that capability, the mechanism developed here can be used by the read-only shards to serve search requests for remote indices. Until then, we can use this mechanism to power read capability for an index backed by snapshot data. Also, once the writable remote storage in #1968 is available, we can add the capability to refresh newly available data from the remote store on the reader side.

Should we have a separate API for creating remote indices, should we extend the restore API, or should we support both?

If there is continuous sync to the remote storage, we may not need an explicit API to create a remote index. However, if we place a restriction that it has to be 'read only' before we read, we might need a way for the customer to mark it read-only and optionally an API to delete it from the hot cluster? If we are depending on snapshots as proposed here, we definitely don't need an API, as we can rely on the snapshot API itself to create the remote index (implicitly).

I may have caused confusion by saying "creating remote indices"; what I meant was a mechanism to resurrect an index backed by a snapshot for performing searches. Indices in a snapshot are not part of the cluster state and are not available for search, so we need some mechanism to bring them into the cluster state. That mechanism could be a separate API like attachIndex or mountIndex, or we could use the snapshot restore API but control how the shards are actually hydrated (i.e. it doesn't download all the data the way a hot restore does).
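
For concreteness, the two shapes could look roughly like this from the Java client side. Only the basic restore request shown is an existing API; the `storageType` option and `AttachIndexRequest` are hypothetical and do not exist today.

```java
import org.opensearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequest;

public class MountApiShapes {

    /**
     * Option A: reuse the existing restore request, adding a hypothetical
     * option that restores metadata only and leaves the data in the remote store.
     */
    public static RestoreSnapshotRequest restoreShaped() {
        RestoreSnapshotRequest request = new RestoreSnapshotRequest("my_repo", "my_snapshot")
            .indices("logs-2022-04");
        // request.storageType("remote_snapshot"); // hypothetical hydration control, not a real method today
        return request;
    }

    // Option B (sketch only): a dedicated attach/mount API decoupled from the
    // snapshot layer, e.g. client.attachIndex(new AttachIndexRequest(...)),
    // where AttachIndexRequest is a hypothetical request type.
}
```
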

I did not understand the second part of the above question "should we extend restore API for it or provide support in both the places"

Clarified above

@muralikpbhat

Great to see we are planning to support writable storage in the future as well.

Thanks for the clarifications on "creating remote indices". For getting the indices back into the cluster state, there are a few considerations when deciding between restore and attach.

  1. We may not even download the shards at all if the remote storage is fast enough for retrievals. So it is different from 'restore', since we are restoring only metadata/cluster state.
  2. For indices that we want to remove from the state after taking a snapshot, we need a separate 'detach' API anyway? Are we thinking of just doing a 'delete'? How do we distinguish between a permanent delete (which also needs to delete from the snapshot?) and a cluster-state-only delete?
  3. With remote writable storage, we may have the state always in the cluster and there is no need for detach/attach. So is it mainly needed when someone is bringing in an external snapshot for search?
  4. With remote writable storage as well, are we thinking of shrinking the cluster state (nice to have) by supporting detach for old indices that may not be written to or queried regularly?
  5. The ideal customer experience while reading is that the customer should be able to just query the data even if the shards are not attached (attaching happens behind the scenes). If the state is always backed up only in the snapshot and getting it back to the cluster is always a 'restore', then there is an argument for reusing the restore API. However, I feel new methods (NoSQL, for example) can emerge to store the required index details in remote storage or a remote DB in the future, and forcing it into snapshot concepts now may not be right. So having explicit detachIndex/attachIndex APIs (or exportIndex/importIndex) might be better, even though we might use snapshots for the implementation now.

@pwalleni

We would like feedback from the community on which snapshot repositories are most commonly used.

Azure Blob Storage, please.

@elfisher

Very excited to see this proposal @sohami! I had a few questions to make sure I am understanding it.

directly create a read-only index backed by the data in remote store in snapshot format and allow performing search requests on it

Just trying to make sure I understand. The search request wouldn't be performed directly against the remote store? Is this mirroring the remote store to a local read only index? Is the actual index in the remote store also read-only?

Should we have a separate API for creating remote indices, should we extend the restore API, or should we support both?

On this, +1 to what @dblock mentioned. But as a follow-up, what are some reasons we would reuse the restore API versus creating a new one?

@sohami
Collaborator Author

sohami commented Apr 26, 2022

Great to see we are planning to support writable storage in the future as well.

Thanks for the clarifications on "creating remote indices". For getting the indices back into the cluster state, there are a few considerations when deciding between restore and attach.

  1. We may not even download the shards at all if the remote storage is fast enough for retrievals. So it is different from 'restore', since we are restoring only metadata/cluster state.

Yes, for this use case we don't plan to download all of the shard data from remote storage while resurrecting the index into the cluster state. However, that can be extended to support a full download in the future if needed for some use case (like faster search performance), which can be controlled by some index settings. The way I think about restore from the user's perspective is that it prepares an index for serving requests. For the current hot, local-storage-based index, it involves restoring metadata and the actual data to local storage. For the remote index we can have the semantics of restoring metadata only, leaving the data in the remote store, and prepping the shards with the information needed to access that data.

  2. For indices that we want to remove from the state after taking a snapshot, we need a separate 'detach' API anyway? Are we thinking of just doing a 'delete'? How do we distinguish between a permanent delete (which also needs to delete from the snapshot?) and a cluster-state-only delete?

This part is purposely not covered in this RFC, as I was expecting it to be handled as part of #1968, which introduces the remote index concept. Here we are focusing on the search capability for indices whose data is in the remote store. Currently, for snapshot-based remote-store-backed indices, it is expected that the snapshot will already exist and that management of that snapshot will be done by the user. For the index, the user will just perform attach and then delete to manipulate the existence of the index from the cluster state's perspective. Delete, from the remote index's perspective, means deleting the cluster state plus any locally cached data for that index. We can probably document the behavior of delete for local-store and remote indices, or introduce a new API just for remote-backed indices to remove them from the cluster state. The full management of remote store indices, including delete, will come into the picture once we have a full multi-tier solution built out.

  3. With remote writable storage, we may have the state always in the cluster and there is no need for detach/attach. So is it mainly needed when someone is bringing in an external snapshot for search?

Yes. This is the first step to providing search functionality on indices backed by the remote store. Once support for #1968 is complete, we can incrementally add support for that use case as well. Even with remote writable storage, there may be a use case where a user wants to mount an index for search only; the attach mechanism can then be used for it.

  4. With remote writable storage as well, are we thinking of shrinking the cluster state (nice to have) by supporting detach for old indices that may not be written to or queried regularly?

Not currently; the current focus is to add the base functionality and then build the various optimizations incrementally.

  5. The ideal customer experience while reading is that the customer should be able to just query the data even if the shards are not attached (attaching happens behind the scenes). If the state is always backed up only in the snapshot and getting it back to the cluster is always a 'restore', then there is an argument for reusing the restore API. However, I feel new methods (NoSQL, for example) can emerge to store the required index details in remote storage or a remote DB in the future, and forcing it into snapshot concepts now may not be right. So having explicit detachIndex/attachIndex APIs (or exportIndex/importIndex) might be better, even though we might use snapshots for the implementation now.

I was thinking that an auto-attach model will give a better user experience when done via asynchronous search, since all the latency of attaching the indices can be absorbed behind the asynchronous call. So if we build the capability in core to attach such indices, then integration with plugins like async search can be done easily.

+1 on the point about the remote index metadata being stored not only in the repository layer but in some external DB or other storage layer, and potentially in a different format as well. Restore is currently tied to the repository layer and to working with the snapshot format, so it may make more sense to have a separate attach API that can be extended to support these other use cases.

@sohami
Collaborator Author

sohami commented Apr 27, 2022

Very excited to see this proposal @sohami! I had a few questions to make sure I am understanding it.

directly create a read-only index backed by the data in remote store in snapshot format and allow performing search requests on it

Just trying to make sure I understand. The search request wouldn't be performed directly against the remote store? Is this mirroring the remote store to a local read only index? Is the actual index in the remote store also read-only?

There won't be a complete download of the index data to the local store. However, there will be on-demand download of data during a search request, kept in a local FileCache. For this RFC we are targeting remote indices backed by snapshots, so the data will inherently be immutable, but we do plan to extend this later to remote indices that are mutable as well.

Should we have a separate API for creating remote indices, should we extend the restore API, or should we support both?

On this, +1 to what @dblock mentioned. But as a follow-up, what are some reasons we would reuse the restore API versus creating a new one?

My initial thought was that, similar to restore, the new API (let's call it attach for now) will also have to perform checks and validations and create the index in the cluster state. The difference with respect to a regular hot snapshot restore is that for remote indices the actual data won't be downloaded. So to avoid maintaining another API, we could probably extend the restore API itself. However, @muralikpbhat raised a good point about the metadata for remote indices being available in a different external store and not only in the repository layer. So it may be worth having a separate API that can encapsulate all these different store use cases, whereas restore is specific to the repository and snapshot layer. That seems like a better experience than overloading the restore API.

@sachinpkale
Member

The OpenSearch engine communicates with the data via a Directory layer, which can have different implementations based on the underlying storage format.

As part of #2700, we are adding a RemoteDirectory interface to read/write files to/from the remote store.
Tracking issue: #2993

@andrross
Member

@sohami @muralikpbhat Can we move the API discussion into #2922? I wrote up a couple more concrete examples and would love any feedback you have.

@sachinpkale @Bukhtawar Have you thought about the API for creating indexes that will be replicated to remote storage? I'd appreciate your feedback as well. As @dblock suggested we really should work backwards from the end state of where we want to get to eventually in terms of the various remote storage-enabled index APIs.

@andrross
Member

andrross commented May 6, 2022

Thanks everyone for the feedback. We will keep this issue open for one more week (until 5/13/2022) and then close this discussion. We'll continue tracking the various design and implementation tasks in linked issues.

@sachinpkale
Member

@sachinpkale @Bukhtawar Have you thought about the API for creating indexes that will be replicated to remote storage? I'd appreciate your feedback as well. As @dblock suggested we really should work backwards from the end state of where we want to get to eventually in terms of the various remote storage-enabled index APIs.

@andrross We will be using the RemoteDirectory interface to store indexed data in the remote store. Please note that the Directory interface is not fully opened up yet; we have implemented only the methods that are required to store/restore the segments. We will gradually add implementations of more methods.
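
As a rough, hypothetical illustration of that "only what's needed so far" approach (this is not the actual RemoteDirectory from #2993; names and signatures are made up), the unimplemented surface can fail loudly until it is added:

```java
import java.io.IOException;

/**
 * Hypothetical sketch of a partially implemented remote directory: only the
 * operations needed to store and restore segments are supported initially,
 * and everything else throws until it is implemented.
 */
abstract class PartialRemoteDirectorySketch {

    // Supported today: enough to upload and download segment files.
    abstract String[] listAll() throws IOException;

    abstract byte[] readFile(String name) throws IOException;

    abstract void writeFile(String name, byte[] content) throws IOException;

    // Not yet needed for store/restore, so deliberately unsupported for now.
    void rename(String source, String dest) throws IOException {
        throw new UnsupportedOperationException("rename is not supported yet");
    }

    void sync(String name) throws IOException {
        throw new UnsupportedOperationException("sync is not supported yet");
    }
}
```
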

Bukhtawar added a commit that referenced this issue Jul 23, 2023
I have nominated, and the maintainers have agreed to invite, Sorabh Hamirwasia (@sohami) to be a co-maintainer. Sorabh has kindly accepted.

Sorabh has led the design and implementation of multiple features like query cancellation, concurrent segment search for aggregations and snapshot inter-operability w/remote storage in OpenSearch. Some significant issues and PRs authored by Sorabh for this effort are as follows:

Feature proposals
Concurrent segment search for aggregations : #6798
Searchable Remote Index : #2900

Implementations
Concurrent segment search for aggregations: #7514
Lucene changes to leaf slices for concurrent search: apache/lucene#12374
Moving concurrent search to core : #7203
Query cancellation support : #986
In total, Sorabh has authored 14 PRs going back to Aug 2021. He also frequently reviews contributions from others and has reviewed nearly 20 PRs in the same time frame. I believe Sorabh will be a valuable addition as a maintainer of OpenSearch and will continue to contribute to the success of the project going forward.

Signed-off-by: Bukhtawar Khan <bukhtawa@amazon.com>
opensearch-trigger-bot, baba-devv, kaushalmahi12, brusic, and shiv0408 pushed commits referencing this issue (cherry-picks and fork backports of the commit above).