# Task Queue in Go using Redis Streams
Relationship between the task name and the stream name:

```
[task]:[stream]

# Example
upload:default
upload:result
```
No, because we want the queue to support heterogeneous task types.
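As a sketch, the `[task]:[stream]` convention above could be captured by a small helper (`streamName` is a hypothetical function, not part of any library):

```go
package main

import "fmt"

// streamName builds a Redis stream key following the [task]:[stream]
// convention, e.g. streamName("upload", "default") -> "upload:default".
func streamName(task, stream string) string {
	return fmt.Sprintf("%s:%s", task, stream)
}

func main() {
	fmt.Println(streamName("upload", "default")) // upload:default
	fmt.Println(streamName("upload", "result"))  // upload:result
}
```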
- The `WorkerPool` aggregates the job channel and the result channel.
- The user enqueues jobs directly onto the job channel owned by the worker pool.
- The worker pool only calls a worker function to execute the job.
- The worker puts the result onto the result channel.
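A minimal in-memory sketch of that structure, under the assumption that the pool owns both channels and workers only run the supplied function (all names here are illustrative, not from an existing library):

```go
package main

import (
	"fmt"
	"sync"
)

type Job struct{ ID int }

type Result struct {
	JobID int
	Out   string
}

// WorkerPool owns the job and result channels; users enqueue onto Jobs
// and read completed work from Results.
type WorkerPool struct {
	Jobs    chan Job
	Results chan Result
	wg      sync.WaitGroup
}

// NewWorkerPool starts `workers` goroutines that each call fn on every job
// they receive and forward the result to the Results channel.
func NewWorkerPool(workers int, fn func(Job) Result) *WorkerPool {
	p := &WorkerPool{Jobs: make(chan Job), Results: make(chan Result)}
	for i := 0; i < workers; i++ {
		p.wg.Add(1)
		go func() {
			defer p.wg.Done()
			for j := range p.Jobs {
				p.Results <- fn(j) // worker function executes the job
			}
		}()
	}
	// Close Results once every worker has drained the (closed) Jobs channel.
	go func() { p.wg.Wait(); close(p.Results) }()
	return p
}

func main() {
	pool := NewWorkerPool(3, func(j Job) Result {
		return Result{JobID: j.ID, Out: fmt.Sprintf("done-%d", j.ID)}
	})
	go func() {
		for i := 1; i <= 5; i++ {
			pool.Jobs <- Job{ID: i}
		}
		close(pool.Jobs)
	}()
	for r := range pool.Results {
		fmt.Println(r.Out)
	}
}
```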
- A client enqueues a job onto the job stream.
- The worker pool consumes messages from the job stream and puts them on a local job channel.
- A set of workers then consumes jobs from this channel and runs the job function.
- The worker that processed the job puts the result on the result channel.
- The worker pool then enqueues each result onto the Redis result stream.
- The client listens to the result stream on the distributed data store.
- The worker pool is responsible for cleaning up/updating the stream after a success or failure.
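The flow above can be modeled end to end with in-memory channels standing in for the Redis streams. In a real deployment `jobStream` and `resultStream` below would be `XADD`/`XREADGROUP` calls against Redis, and the acknowledgement step would be `XACK`; all names here are assumptions for illustration:

```go
package main

import "fmt"

type Msg struct {
	ID      int
	Payload string
}

// pipeline models the full flow: client -> job stream -> local job channel
// -> worker -> result channel -> result stream -> client.
func pipeline(inputs []string) []string {
	jobStream := make(chan Msg, len(inputs))    // stands in for the Redis job stream
	resultStream := make(chan Msg, len(inputs)) // stands in for the Redis result stream

	// Client enqueues jobs onto the job stream.
	for i, p := range inputs {
		jobStream <- Msg{ID: i, Payload: p}
	}
	close(jobStream)

	// Worker pool moves messages from the job stream onto a local job channel.
	jobs := make(chan Msg)
	go func() {
		for m := range jobStream {
			jobs <- m
		}
		close(jobs)
	}()

	// A worker processes each job and puts the result on the result channel.
	results := make(chan Msg)
	go func() {
		for j := range jobs {
			results <- Msg{ID: j.ID, Payload: j.Payload + ":ok"}
		}
		close(results)
	}()

	// Worker pool enqueues each result onto the result stream; a real pool
	// would also acknowledge (XACK) the consumed job entry here.
	go func() {
		for r := range results {
			resultStream <- r
		}
		close(resultStream)
	}()

	// Client listens on the result stream.
	var out []string
	for r := range resultStream {
		out = append(out, r.Payload)
	}
	return out
}

func main() {
	fmt.Println(pipeline([]string{"upload-a", "upload-b"}))
}
```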
- The worker pool writes to `taskStream` while the worker function only reads from `taskStream`. This ensures confinement by keeping each component's scope small.
- The broker shares state between the client and the worker pool.
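In Go, that confinement can be enforced at compile time with channel direction types: the owner holds the bidirectional channel and hands each side only the direction it needs. A sketch (`produce`/`consume` and the channel name are illustrative):

```go
package main

import "fmt"

// produce receives a send-only view of taskStream: it can write but not read.
func produce(taskStream chan<- string, tasks []string) {
	for _, t := range tasks {
		taskStream <- t
	}
	close(taskStream) // only the writer closes the channel
}

// consume receives a receive-only view: it can read but never write, so the
// compiler enforces the read-only confinement described above.
func consume(taskStream <-chan string) []string {
	var seen []string
	for t := range taskStream {
		seen = append(seen, t)
	}
	return seen
}

func main() {
	taskStream := make(chan string, 4) // the owner holds the bidirectional channel
	go produce(taskStream, []string{"upload:1", "upload:2"})
	fmt.Println(consume(taskStream))
}
```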
Why am I adding a client abstraction to aggregate the broker and the worker pool? I could just communicate with the broker interface directly.
Decouple the worker pool and the distributed queue by adding an interface we will call `Broker`.
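One possible shape for that interface, with a trivial in-memory implementation the worker pool can be exercised against before a Redis Streams implementation is swapped in (the method set and all names here are assumptions, not an existing API):

```go
package main

import (
	"errors"
	"fmt"
)

// Broker decouples the worker pool from the distributed queue: the pool
// only speaks this interface, so Redis Streams is just one implementation.
type Broker interface {
	Enqueue(stream, payload string) error
	Dequeue(stream string) (string, error)
}

// memBroker is an in-memory Broker for tests and local runs.
type memBroker struct {
	streams map[string][]string
}

func newMemBroker() *memBroker {
	return &memBroker{streams: make(map[string][]string)}
}

func (b *memBroker) Enqueue(stream, payload string) error {
	b.streams[stream] = append(b.streams[stream], payload)
	return nil
}

func (b *memBroker) Dequeue(stream string) (string, error) {
	q := b.streams[stream]
	if len(q) == 0 {
		return "", errors.New("stream empty: " + stream)
	}
	b.streams[stream] = q[1:]
	return q[0], nil
}

func main() {
	var b Broker = newMemBroker()
	b.Enqueue("upload:default", "file.png")
	msg, _ := b.Dequeue("upload:default")
	fmt.Println(msg) // file.png
}
```

Because the pool depends only on `Broker`, the in-memory version and a Redis-backed version are interchangeable at the call site.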