Currently, the TrackingEventProcessor only supports a single thread to process events. Instead, it should be able to use a pool of threads to process events concurrently.
A single thread will always be used to read events from the stream. That thread will calculate the segment ID for each incoming event, based on the event's sequencing policy value and a hash function. Two events with the same sequencing policy value will always be processed by the same thread.
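As a rough sketch of that routing step (the class and method names here are hypothetical, not part of the existing API): hash the sequencing policy value and take it modulo the segment count, so equal values always land on the same segment.

```java
// Hypothetical sketch: route events to segments by hashing the
// sequencing policy value, so equal values map to the same segment.
public class SegmentRouter {
    private final int segmentCount;

    public SegmentRouter(int segmentCount) {
        this.segmentCount = segmentCount;
    }

    // sequencingPolicyValue is whatever the configured sequencing
    // policy extracts from the event (e.g. the aggregate identifier).
    public int segmentFor(Object sequencingPolicyValue) {
        // Math.floorMod guards against negative hashCode() results.
        return Math.floorMod(sequencingPolicyValue.hashCode(), segmentCount);
    }

    public static void main(String[] args) {
        SegmentRouter router = new SegmentRouter(4);
        // Two events for the same aggregate go to the same segment.
        System.out.println(router.segmentFor("aggregate-42") == router.segmentFor("aggregate-42"));
    }
}
```

Any deterministic hash works here; the only hard requirement is that the mapping is stable for the lifetime of the segment configuration.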
The other threads will each process their portion of the message queue, updating the token belonging to their segment.
Existing tokens should implement a "merge" method, which combines two tokens into a single one. That merged token will allow all messages to be delivered that have not been seen by all segments. A token should also implement a "contains" method, which allows the token to compare itself with another token and indicate whether the message carrying that token has already been processed.
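For a simple index-based token, these two operations could look like the sketch below (the `IndexToken` class is illustrative only; real tokens such as gap-aware ones would need a richer representation). Merging takes the lowest position, so any event unseen by either segment is redelivered; "contains" is then a position comparison.

```java
// Hypothetical sketch of the proposed merge/contains operations on a
// simple index-based tracking token.
final class IndexToken {
    final long index; // position of the last processed event

    IndexToken(long index) {
        this.index = index;
    }

    // Combine two tokens: stream from the lowest position, so events
    // not yet seen by all segments are (re)delivered.
    IndexToken merge(IndexToken other) {
        return new IndexToken(Math.min(this.index, other.index));
    }

    // True if the event carrying the given token has already been
    // processed by the owner of this token.
    boolean contains(IndexToken eventToken) {
        return this.index >= eventToken.index;
    }
}
```

For example, merging tokens at positions 10 and 7 yields a token at position 7; the segment that is already at 10 will then report `contains(...) == true` for events 8 through 10 and skip them.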
This allows the fetcher thread to "merge" all segment tokens into a single one, and open a stream from the event store. Each processor thread can use its own segment token and check if it already "contains" the incoming message. If so, the message has already been processed, and can be safely skipped.
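The fetcher/processor interplay described above can be simulated with plain longs as token positions (a toy model, not the actual stream API): the fetcher opens the stream at the merged (minimum) position, and each processor skips events its own segment token already covers.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical simulation of the fetch-merge-skip workflow using
// plain long positions as token stand-ins.
public class ReplaySketch {
    public static void main(String[] args) {
        // Last index processed by each segment.
        long[] segmentTokens = {10, 7};

        // Fetcher: open the stream at the merged (minimum) position.
        long mergedToken = Math.min(segmentTokens[0], segmentTokens[1]);

        // Processor for segment 0: skip events its token "contains".
        List<Long> processedBySegment0 = new ArrayList<>();
        for (long event = mergedToken + 1; event <= 12; event++) {
            if (event <= segmentTokens[0]) {
                continue; // already processed by segment 0: safely skipped
            }
            processedBySegment0.add(event);
            segmentTokens[0] = event; // advance this segment's token
        }
        System.out.println(processedBySegment0); // [11, 12]
    }
}
```

Segment 0 sees events 8 through 12 on the merged stream but only processes 11 and 12, exactly the events its own token did not yet contain.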
Preferably, the processor threads use a bounded queue, to prevent the reader thread from filling the heap when it outperforms the consumers. The Disruptor could be a good implementation for this mechanism.
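Short of adopting the Disruptor, the JDK's `ArrayBlockingQueue` already gives the required backpressure: `put` blocks the reader once the bound is reached, so the heap cannot be flooded. A minimal sketch:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: a bounded hand-off between the reader thread and a
// processor thread. put(...) blocks when the queue is full, which
// throttles the reader instead of growing the heap.
public class BoundedHandoff {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> segmentQueue = new ArrayBlockingQueue<>(1024);

        // Reader side: blocks once 1024 events are buffered.
        segmentQueue.put("event-1");

        // Processor side: blocks until an event is available.
        String event = segmentQueue.take();
        System.out.println(event); // event-1
    }
}
```

The Disruptor's ring buffer provides the same bound with lower contention (no locks on the hot path), which is why it is suggested here.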