
Explore explicitly modelling processing domain around batching #23

Open
alechenninger opened this issue Feb 26, 2016 · 0 comments

@alechenninger
Contributor

@kahowell and I agreed that making the batching mechanisms currently present in the lightblue implementation more explicit would improve code maintainability. The current model of returning a Future from DocumentEvent#lookupDocument, Notification#toDocumentEvents, and Message#process is extremely flexible, but perhaps to a fault. The Future implementation in BulkLightblueRequester is robust in that it batches requests efficiently behind an unimposing API, but it is also a bit magical and foreign to the uninitiated.
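For context, this is roughly the shape of the Future-based APIs in question (signatures approximated for illustration, not copied verbatim from the code):

```java
import java.util.Collection;
import java.util.concurrent.Future;

// Approximate shape only; the actual interfaces in this repo may differ.
interface DocumentEvent {
    // The caller receives a Future; a BulkLightblueRequester-backed
    // implementation can defer the underlying request so that many pending
    // lookups are satisfied by a single bulk call.
    Future<?> lookupDocument();
}

interface Notification {
    Future<Collection<DocumentEvent>> toDocumentEvents();
}

interface Message {
    Future<?> process();
}
```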

In addition, batching is a core part of the problem domain. In almost all imaginable implementations, processing events or messages in bulk will be more efficient than processing them one at a time, and we've deliberately made other API decisions around batch processing (for example, all repository APIs use Collections). The Future-based APIs allow for the concurrency necessary to do batching transparently, but they also allow for just about any usage. This flexibility is unnecessary, and we could likely better serve the project with an API more specific to the batching problem domain, along the lines sketched below.
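As a rough illustration of the direction (the names below are invented for this sketch, not a proposed final API):

```java
import java.util.Collection;
import java.util.Map;

// Hypothetical sketch: batching is modelled explicitly by passing whole
// collections through the API, so the bulk request is visible in the
// interface rather than hidden behind Future plumbing. DocumentEvent stands
// in for the domain type discussed above.
interface DocumentEvent {}

interface BatchDocumentLookup {
    // One call resolves an entire batch of events; an implementation can
    // issue a single bulk lightblue request for the whole collection.
    Map<DocumentEvent, Object> lookupDocuments(Collection<DocumentEvent> events);
}
```

The point is only that the batch boundary becomes an explicit part of the interface, instead of an emergent property of how Futures happen to be resolved.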
