
Commit f55e94c
GitBook: [#122] No subject
roggervalf authored and gitbook-bot committed Sep 20, 2022
1 parent 6f35126 commit f55e94c
Showing 3 changed files with 5 additions and 5 deletions.
6 changes: 3 additions & 3 deletions docs/gitbook/guide/events.md
````diff
@@ -32,7 +32,7 @@ myWorker.on('failed', (job: Job) => {
 });
 ```
 
-The events above are local for the workers that actually completed the jobs, however, in many situations you want to listen to all the events emitted by all the workers in one single place. For this you can use the [QueueEvents](https://github.com/taskforcesh/bullmq/blob/master/docs/gitbook/api/bullmq.queueevents.md) class:
+The events above are local for the workers that actually completed the jobs, however, in many situations you want to listen to all the events emitted by all the workers in one single place. For this you can use the [QueueEvents](../api/bullmq.queueevents.md) class:
 
 ```typescript
 import { QueueEvents } from 'bullmq';
@@ -51,9 +51,9 @@ queueEvents.on('progress', ({ jobId, data }: { jobId: string; data: number | obj
 The QueueEvents class is implemented using [Redis streams](https://redis.io/topics/streams-intro). This has some nice properties, for example, it provides guarantees that the events are delivered and not lost during disconnections such as it would be the case with standard pub-sub.
 
 {% hint style="danger" %}
-The event stream is auto-trimmed so that its size does not grow too much, by default it is ~10.000 events, but this can be configured with the `streams.events.maxLen` option.
+The event stream is auto-trimmed so that its size does not grow too much, by default it is \~10.000 events, but this can be configured with the `streams.events.maxLen` option.
 {% endhint %}
 
 ## Read more:
 
-- 💡 [Queue Events API Reference](https://github.com/taskforcesh/bullmq/blob/master/docs/gitbook/api/bullmq.queueevents.md)
+* 💡 [Queue Events API Reference](https://api.docs.bullmq.io/classes/QueueEvents.html)
````
4 changes: 2 additions & 2 deletions docs/gitbook/guide/queuescheduler.md
````diff
@@ -17,12 +17,12 @@ This class automatically moves delayed jobs back to the waiting queue when it is
 You need at least one QueueScheduler running somewhere for a given queue if you require functionality such as delayed jobs, retries with backoff and rate limiting.
 {% endhint %}
 
-The reason for having this functionality in a separate class instead of in the workers \(as in Bull 3.x\) is because whereas you may want to have a large number of workers for parallel processing, for the scheduler you probably only want a couple of instances for each queue that requires delayed or stalled checks. One will be enough but you can have more just for redundancy.
+The reason for having this functionality in a separate class instead of in the workers (as in Bull 3.x) is because whereas you may want to have a large number of workers for parallel processing, for the scheduler you probably only want a couple of instances for each queue that requires delayed or stalled checks. One will be enough but you can have more just for redundancy.
 
 {% hint style="warning" %}
 It is ok to have as many QueueScheduler instances as you want, just keep in mind that every instance will perform some bookkeeping so it may create some noticeable CPU and IO usage in your Redis instances.
 {% endhint %}
 
 ## Read more:
 
-- 💡 [Queue Scheduler API Reference](https://github.com/taskforcesh/bullmq/blob/master/docs/gitbook/api/bullmq.queuescheduler.md)
+* 💡 [Queue Scheduler API Reference](https://api.docs.bullmq.io/classes/QueueScheduler.html)
````

0 comments on commit f55e94c
