
Batch unordered #63

Closed

wants to merge 9 commits

Conversation

@ahadadi (Collaborator) commented Jan 20, 2017

This version of batchUnordered treats the list of elements as a balanced binary tree by using heap-on-array indexing (the children of node i sit at indices 2i+1 and 2i+2), aiming to make the recursion depth O(log n) instead of O(n).
It uses an AtomicIntegerArray to track the set of pending nodes, and an auxiliary AtomicLong to obtain the index of the pending node that is left-most and highest in the tree.
The algorithm performs a pre-order traversal of the tree. When it encounters a node already claimed by another thread, it hops to the left-most, highest node that is not yet processed. This increases the chance that the recursion stays shallow and reduces how often an already-processed node is encountered.
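
To make the scheme concrete, here is a minimal sketch of the idea (hypothetical class and method names, not the actual PR code; `process()` is a placeholder for producing the per-element future, and the hint maintenance is deliberately best-effort):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicIntegerArray;
import java.util.concurrent.atomic.AtomicLong;

// Sketch only: the input list is viewed as an implicit balanced binary tree
// via heap indexing, so node i has children 2*i + 1 and 2*i + 2 and the
// tree depth is O(log n).
final class BatchTraversalSketch<T> {
  private final List<T> elements;
  private final AtomicIntegerArray pending;  // 1 = not yet claimed, 0 = claimed
  private final AtomicLong hint;             // leftmost, highest unclaimed node

  BatchTraversalSketch(List<T> elements) {
    this.elements = elements;
    this.pending = new AtomicIntegerArray(elements.size());
    for (int i = 0; i < elements.size(); i++) pending.set(i, 1);
    this.hint = new AtomicLong(0);
  }

  // Pre-order traversal: claim and process the node, then recurse into the
  // left and right subtrees. On meeting a node claimed by another thread,
  // hop to the current leftmost-highest unclaimed node instead of walking
  // that whole subtree. The hint only ever increases, so hops terminate.
  void traverse(int i) {
    if (i >= elements.size()) return;
    if (!pending.compareAndSet(i, 1, 0)) {   // another thread claimed node i
      int hop = (int) hint.get();
      if (hop < elements.size() && hop != i) traverse(hop);
      return;
    }
    process(elements.get(i));
    hint.compareAndSet(i, i + 1);            // best-effort hint advance
    traverse(2 * i + 1);                     // left child
    traverse(2 * i + 2);                     // right child
  }

  private void process(T element) {
    // placeholder for the per-element work (in the PR, producing a future)
  }
}
```

Note that in heap layout the array order is level by level, left to right, so the smallest unclaimed index is exactly the left-most, highest pending node.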

@oshai (Contributor) commented Jan 21, 2017

How about doing it without recursion, to prevent a stack overflow?

@ahadadi (Collaborator, Author) commented Jan 21, 2017

I was not able to implement it without recursion, given the nature of ComposableFuture: the traversal has to be built using nothing but map / flatMap on the produced futures.
Can you write down a non-recursive solution?
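
For context on why flatMap-only composition pushes toward recursion, here is a hypothetical illustration (it uses java.util.concurrent.CompletableFuture and thenCompose as stand-ins for ComposableFuture and flatMap; none of this is the actual ob1k API, and it processes subtrees sequentially for simplicity):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

final class RecursiveComposeSketch {
  // With only flatMap-style composition available, descending the implicit
  // heap-indexed tree naturally expresses itself as recursion: a subtree's
  // traversal is itself a future that can only be continued via thenCompose.
  // With heap indexing the recursion depth tracks the tree depth, O(log n),
  // which is the point of the balanced-tree layout in this PR.
  static CompletableFuture<Void> traverse(List<Runnable> tasks, int i) {
    if (i >= tasks.size()) {
      return CompletableFuture.completedFuture(null);
    }
    return CompletableFuture.runAsync(tasks.get(i))           // process node i
        .thenCompose(ignored -> traverse(tasks, 2 * i + 1))   // left subtree
        .thenCompose(ignored -> traverse(tasks, 2 * i + 2));  // right subtree
  }
}
```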

…o a subtree which is being processed by another flow.
…from the leftmost highest node when done traversing a subtree.
@ahadadi ahadadi closed this Jan 23, 2017
@ahadadi ahadadi deleted the batch_unordered branch January 23, 2017 06:52