bee-queue, kue, bull
They did a great job. Their speed is improved by pipelining and Lua scripts. However, there is room for improvement:
- Redis does not natively support job queues, so the library logic ends up complex and inefficient.
- Pipelining and Lua scripts cannot completely remove round-trip times; for example, the library sends a brpoplpush command to the server for every job (see the sketch after this list).
- It is hard to monitor job metadata from the library: execution time, number of queues, ...
- It is hard to port the library to other languages.
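To illustrate the round-trip cost, here is a rough sketch of the blocking-pop pattern Redis-backed queues typically rely on. It uses ioredis directly with hypothetical key names (queue:wait, queue:active); it is not v3u's or any library's actual internals:

const Redis = require('ioredis')
const redis = new Redis()

async function poll() {
  while (true) {
    // Block until a job id is available, then atomically move it to the
    // active list: one full network round trip for every single job.
    const jobId = await redis.brpoplpush('queue:wait', 'queue:active', 0)
    // ... fetch the job payload by id and execute it ...
  }
}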
These basic benchmarks ran 10,000 jobs through each library on Node v10.5.0, macOS 10.13, 8 GB RAM, 2 GHz Intel Core i5. The numbers shown are the average of 10 runs.
- bee-queue: 3017.1725 ms
- v3u: 2144.2959 ms
Read the documentation at cheetah.
Install:
yarn add v3u
Create a queue:
const { Queue } = require('v3u')
const queue = new Queue('mul', {
uri: 'cheetah://localhost:1991',
})
Create a worker:
queue.process(({ x, y }, done, progress) => {
  done(null, x * y)
})
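The third argument lets a long-running worker report intermediate progress back to the producer. A minimal sketch, assuming progress takes a single numeric value (the exact semantics are not confirmed here; see the cheetah docs):

queue.process(({ x, y }, done, progress) => {
  progress(50) // assumed: forwarded to the producer's progress callback
  setTimeout(() => {
    progress(100)
    done(null, x * y) // error-first: done(new Error('...')) is assumed to fail the job
  }, 1000)
})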
Create a producer:
queue.createJob({ x: 2, y: 3 }, (value) => {
  // job done, value = 6
}, (p) => {
  // job progress; may be called multiple times
})
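Putting the pieces together, a minimal end-to-end sketch built only from the calls shown above (that a single Queue instance can act as both worker and producer is an assumption):

const { Queue } = require('v3u')

const queue = new Queue('mul', {
  uri: 'cheetah://localhost:1991',
})

// Worker: multiply the two numbers in the job payload.
queue.process(({ x, y }, done, progress) => {
  progress(100)
  done(null, x * y)
})

// Producer: enqueue a job, then log progress and the final result.
queue.createJob({ x: 4, y: 5 }, (value) => {
  console.log('job done, value =', value) // 20
}, (p) => {
  console.log('progress:', p)
})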