
Startup loading behavior batch vs all #18

Closed
ledjon-behluli opened this issue Sep 10, 2023 · 0 comments · Fixed by #19
Labels
enhancement New feature or request

Comments

@ledjon-behluli
Owner

Getting all the data from the director grain in one go is the more efficient option: it involves n + 1 calls, where n is the number of partitions (i.e. store grains) and 1 is the call from the agent to the director, with all calls to the store grains running in parallel via Task.WhenAll. However, we need to keep in mind the potential size of the whole tuple space: as the number of partitions grows, this call to the director may result in contention for ThreadPool threads and can lead to thread starvation, therefore degrading performance.
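The "all at once" strategy can be sketched language-agnostically. The project itself is C#/Orleans (Task.WhenAll over grain calls); below is an illustrative Python asyncio analogue, where `PARTITIONS`, `load_partition`, and the returned data are all hypothetical stand-ins:

```python
import asyncio

PARTITIONS = 8  # hypothetical number of store grains (n)

async def load_partition(partition_id: int) -> list[int]:
    """Stand-in for an async call to a single store grain."""
    await asyncio.sleep(0)  # simulate the RPC hop
    return [partition_id * 10 + i for i in range(3)]

async def load_all() -> list[int]:
    # One agent->director call fans out to all n partitions in
    # parallel (the C# equivalent is Task.WhenAll), for n + 1
    # calls total.
    results = await asyncio.gather(
        *(load_partition(p) for p in range(PARTITIONS))
    )
    return [item for batch in results for item in batch]

space = asyncio.run(load_all())
print(len(space))  # 24
```

The parallel fan-out is exactly the part that can saturate the thread pool when n grows large, which is the concern raised above.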

An alternative is for the director to expose an IAsyncEnumerable-based way to load the data, where each iteration yields a single batch. A "batch" here means the director calls the store grains one-by-one and streams each result back, which the agent can append to its in-memory dataset. This results in a slower load of the whole tuple space, as there are 2*n calls (each iteration being a call to the director plus 1 call to a store partition), but it should not lead to potential thread starvation.
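The batched strategy maps naturally onto an async generator. Again a hedged Python sketch rather than the actual C# IAsyncEnumerable implementation; `PARTITIONS` and `load_partition` are the same hypothetical stand-ins as before:

```python
import asyncio
from typing import AsyncIterator

PARTITIONS = 8  # hypothetical number of store grains (n)

async def load_partition(partition_id: int) -> list[int]:
    """Stand-in for an async call to a single store grain."""
    await asyncio.sleep(0)  # simulate the RPC hop
    return [partition_id * 10 + i for i in range(3)]

async def load_batched() -> AsyncIterator[list[int]]:
    # The director visits the store grains one-by-one and streams
    # each partition's data back as a separate batch (the C#
    # analogue is IAsyncEnumerable<T>): 2*n calls instead of n + 1,
    # but no parallel fan-out that could starve the thread pool.
    for partition_id in range(PARTITIONS):
        yield await load_partition(partition_id)

async def agent_load() -> list[int]:
    space: list[int] = []
    async for batch in load_batched():
        space.extend(batch)  # agent appends each batch as it arrives
    return space

space = asyncio.run(agent_load())
print(len(space))  # 24
```

Both strategies end with the same dataset; they differ only in call count and in how much concurrent load hits the thread pool at once.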

We should have a configurable behavior for this in the SpaceClientOptions.
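A configurable behavior could look something like the sketch below. Note this is purely hypothetical: the option name `loading_strategy` and the `LoadingStrategy` enum are invented for illustration, and the real SpaceClientOptions shape is defined by the project (in C#):

```python
from dataclasses import dataclass
from enum import Enum

class LoadingStrategy(Enum):
    ALL = "all"          # single parallel fan-out (n + 1 calls)
    BATCHED = "batched"  # streamed, one partition at a time (2*n calls)

@dataclass
class SpaceClientOptions:
    # Hypothetical option; defaulting to the faster strategy and
    # letting large deployments opt into batched loading.
    loading_strategy: LoadingStrategy = LoadingStrategy.ALL
```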
