Commit

docs: Improve documentation grammar, format, etc. (#2158)

sambostock committed Nov 18, 2023
1 parent d6c31de commit 25090ab
Showing 66 changed files with 293 additions and 291 deletions.
4 changes: 2 additions & 2 deletions docs/gitbook/bull-3.x-migration/compatibility-class.md
@@ -6,12 +6,12 @@ Differences in interface include

* fixed order of `add()` and `process()` method arguments
* class instantiation requires use of the `new` operator
* interfaces for Queue and Job options and Job class do not have wrappers and used directly
* interfaces for Queue and Job options and Job class do not have wrappers and are used directly
* there's no `done` argument expected in `process()` callback anymore; now the callback must always return a `Promise` object
* name property is mandatory in `add()` method
* concurrency is moved from `process()` argument to queue options

Functional differences generally include only absence of named processors feature and minor changes in local and global events set. The mandatory `name` property in `add()` method can contain any string and gets saved to Redis as is. When a job is in progress, you can read this value as `job.name` \(`job.data` and `job.id` are available as usual\). See the \[link\] for details.
Functional differences generally include only the absence of the named processors feature and minor changes in the local and global event sets. The mandatory `name` property in the `add()` method can contain any string and is saved to Redis as is. When a job is in progress, you can read this value using `job.name` \(`job.data` and `job.id` are available as usual\). See the \[link\] for details.
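
To make the listed differences concrete, here is a minimal sketch (not part of the original page; the `Queue3` import name and the option shapes are assumptions that may vary between versions):

```typescript
import { Queue3 } from 'bullmq'; // assumed export name of the compatibility class

// Class instantiation requires the `new` operator.
const queue = new Queue3('my-queue');

// The processor takes no `done` callback; it must return a Promise.
// Concurrency would be configured via queue options rather than here.
queue.process(async job => {
  console.log(job.name, job.id, job.data); // job.name is the string passed to add()
  return 'ok';
});

async function enqueue() {
  // `name` is mandatory and comes first, followed by the data and options.
  await queue.add('transcode', { file: 'video.mp4' }, { attempts: 3 });
}
```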

The all-in-one example:

2 changes: 0 additions & 2 deletions docs/gitbook/bull/patterns/custom-backoff-strategy.md
@@ -112,5 +112,3 @@ myQueue.add({ msg: 'Specific Error' }, {
}
});
```

\
2 changes: 1 addition & 1 deletion docs/gitbook/bull/patterns/manually-fetching-jobs.md
@@ -53,4 +53,4 @@ if (nextJobdata) {

**Note**

By default the lock duration for a job that has been returned by `getNextJob` or `moveToCompleted` is 30 seconds, if it takes more time than that the job will be automatically marked as stalled and depending on the max stalled options be moved back to the wait state or marked as failed. In order to avoid this you must use `job.extendLock(duration)` in order to give you some more time before the lock expires. It is recommended to extend the lock when half the lock time has passsed.
By default the lock duration for a job that has been returned by `getNextJob` or `moveToCompleted` is 30 seconds. If it takes more time than that, the job will automatically be marked as stalled and, depending on the max stalled options, be moved back to the wait state or marked as failed. To avoid this, use `job.extendLock(duration)` to give yourself more time before the lock expires. It is recommended to extend the lock when half the lock time has passed.
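
For illustration, a rough sketch of manually fetching a job and renewing its lock at half the default duration (the queue name and error handling are illustrative; exact signatures may differ between Bull versions):

```typescript
import Queue from 'bull';

const queue = new Queue('my-queue');

async function processNextJob() {
  const job = await queue.getNextJob();
  if (!job) return;

  // Renew the lock at roughly half the default 30-second lock duration
  // so the job is not marked as stalled while we are still working on it.
  const renewal = setInterval(() => job.extendLock(30000), 15000);

  try {
    // ... do the actual work here ...
    await job.moveToCompleted('some return value', true);
  } catch (err) {
    await job.moveToFailed({ message: (err as Error).message }, true);
  } finally {
    clearInterval(renewal);
  }
}
```
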
10 changes: 5 additions & 5 deletions docs/gitbook/bull/patterns/persistent-connections.md
@@ -4,15 +4,15 @@ A crucial feature for a subsystem in a microservice architecture is that it shou

For example, if your service has a connection to a database, and the connection to said database breaks, you would like that service to handle this disconnection as gracefully as possible and as soon as the database is back online continue to work without human intervention.

Since Bull relies on **ioredis** for accessing Redis, the default is auto-reconnect forever, this behaviour can be customized but most likely the default is the best setting currently: [https://github.com/luin/ioredis#auto-reconnect](https://github.com/luin/ioredis#auto-reconnect)
Since Bull relies on **ioredis** for accessing Redis, the default is auto-reconnect forever. This behaviour can be customized but most likely the default is the best setting currently: [https://github.com/luin/ioredis#auto-reconnect](https://github.com/luin/ioredis#auto-reconnect)

In the context of Bull, we have normally two different cases that are handled differently. 
In the context of Bull, we normally have two different cases that are handled differently.

### Workers

A worker is consuming jobs from the queue as fast as it can. If it loses the connection to Redis we want the worker to "wait" until Redis is available again. For this to work we need to understand an important setting in our Redis options (which are handled by ioredis):

#### maxRetriesPerRequest
#### `maxRetriesPerRequest`

This setting tells the ioredis client how many times to try a command that fails before throwing an error. So even if Redis is unreachable or offline, the command will be retried until this situation changes or the maximum number of attempts is reached.
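
As an illustration, a worker-side queue might disable the per-command retry limit so it simply waits for Redis to come back (a sketch, not from the original docs; the host, port and queue name are placeholders):

```typescript
import Queue from 'bull';

// Worker-side queue: let the Redis client retry commands indefinitely,
// so the worker effectively "waits" until Redis is reachable again.
const videoQueue = new Queue('video-transcoding', {
  redis: {
    host: '127.0.0.1',
    port: 6379,
    maxRetriesPerRequest: null, // in ioredis, null means no per-command retry limit
  },
});

videoQueue.process(async job => {
  // long-running work here; processing resumes automatically after a
  // Redis outage without any human intervention
});
```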

@@ -22,11 +22,11 @@ This guarantees that the workers will keep processing forever as long as there i

### Queue

A simple Queue instance used for managing the queue such as adding jobs, pausing, using getters, etc. has usually different requirements as the worker. 
A simple Queue instance used for managing the queue such as adding jobs, pausing, using getters, etc. usually has different requirements from the worker.

For example, say that you are adding jobs to a queue as the result of a call to an HTTP endpoint. The caller of this endpoint cannot wait forever if the connection to Redis happens to be down when this call is made.

Therefore the **maxRetriesPerRequest** setting should either be left at its default (which currently is 20) or set it to another value, maybe 1 so that the user gets an error quickly and can retry later.
Therefore the `maxRetriesPerRequest` setting should either be left at its default (which currently is 20) or set to another value, maybe 1, so that the user gets an error quickly and can retry later.
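
A sketch of the producer side under that reasoning, assuming an Express-style HTTP handler (the framework choice, route and values are illustrative):

```typescript
import express from 'express';
import Queue from 'bull';

const app = express();
app.use(express.json());

// Producer-side queue used from an HTTP handler: fail fast if Redis is
// down instead of blocking the caller indefinitely.
const videoQueue = new Queue('video-transcoding', {
  redis: {
    host: '127.0.0.1',
    port: 6379,
    maxRetriesPerRequest: 1, // surface connection errors quickly
  },
});

app.post('/transcode', async (req, res) => {
  try {
    await videoQueue.add({ file: req.body.file });
    res.status(202).end();
  } catch (err) {
    // The caller gets an error promptly and can retry later.
    res.status(503).json({ error: 'queue temporarily unavailable' });
  }
});

app.listen(3000);
```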



1 change: 0 additions & 1 deletion docs/gitbook/bull/patterns/returning-job-completions.md
@@ -3,4 +3,3 @@
A common pattern is where you have a cluster of queue processors that just process jobs as fast as they can, and some other services that need to take the result of these processors and do something with it, maybe storing results in a database.

The most robust and scalable way to accomplish this is by combining the standard job queue with the message queue pattern: a service sends jobs to the cluster just by opening a job queue and adding jobs to it, and the cluster will start processing as fast as it can. Every time a job gets completed in the cluster, a message is sent to a results message queue with the result data, and this queue is listened to by some other service that stores the results in a database.
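
A hedged sketch of this pattern with Bull (the queue names, result shape and persistence call are illustrative):

```typescript
import Queue from 'bull';

const jobQueue = new Queue('transcoding');             // processed by the worker cluster
const resultsQueue = new Queue('transcoding-results'); // consumed by the results service

// Worker cluster side: just process jobs and return a result.
jobQueue.process(async job => {
  return { ok: true, file: job.data.file };
});

// Forward every completion to the results queue. With global events the
// result is typically delivered as a serialized string, hence JSON.parse.
jobQueue.on('global:completed', async (jobId, result) => {
  await resultsQueue.add({ jobId, result: JSON.parse(result) });
});

// Results service side: store results in a database.
resultsQueue.process(async job => {
  // await db.saveResult(job.data); // hypothetical persistence call
});
```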

14 changes: 7 additions & 7 deletions docs/gitbook/bullmq-pro/batches.md
@@ -4,11 +4,11 @@ description: Processing jobs in batches

# Batches

It is possible to configure the workers so that instead of processing one job at a time they can process up to a number of jobs (a so-called batch) in one go.
It is possible to configure workers so that instead of processing one job at a time they can process up to a number of jobs (a so-called _batch_) in one go.

Workers using batches have slightly different semantics and behavior than normal workers, so read the following examples carefully to avoid pitfalls.

In order to enable batches you must pass the batch option with a size representing the maximum amount of jobs per batch:
In order to enable batches you must pass the `batches` option with a size representing the maximum amount of jobs per batch:

```typescript
const worker = new WorkerPro(
@@ -26,7 +26,7 @@ const worker = new WorkerPro(
```

{% hint style="info" %}
There is no maximum limit for the size of the batches, however, keep in mind that there is an overhead proportional to the size of the batch so really large batches could create performance issues. A typical value would be something between 10 and 50 jobs per batch.
There is no maximum limit for the size of the batches; however, keep in mind that there is an overhead proportional to the size of the batch, so really large batches could create performance issues. A typical value would be something between 10 and 50 jobs per batch.
{% endhint %}

### Failing jobs
@@ -54,18 +54,18 @@ const worker = new WorkerPro(
);
```

Only the jobs that are `setAsFailed` will fail, the rest will be moved to complete when the processor for the batch job completes.
Only the jobs that are `setAsFailed` will fail; the rest will be moved to _complete_ when the processor for the batch job completes.
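
As a rough illustration (the enabling example is truncated in this view, so the `connection` value and the batch option shape are assumptions):

```typescript
import { WorkerPro } from '@taskforcesh/bullmq-pro';

const connection = { host: '127.0.0.1', port: 6379 }; // placeholder connection

const worker = new WorkerPro(
  'My Queue',
  async job => {
    // With batches enabled, the processor receives a wrapper job;
    // the individual jobs are accessed through getBatch().
    for (const batchedJob of job.getBatch()) {
      try {
        // ... handle batchedJob.data here ...
      } catch (err) {
        // Only jobs explicitly set as failed will fail; the rest are
        // moved to complete when the batch processor finishes.
        batchedJob.setAsFailed(err as Error);
      }
    }
  },
  { connection, batch: { size: 10 } }, // option name may vary between versions
);
```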

### Handling events

Batches are handled by wrapping all the jobs in a batch into a dummy job that keeps all the jobs in an internal array. This approach simplifies the mechanics of running batches; however, it also affects things like how events are handled. For instance, if you need to listen for individual jobs that have completed or failed you must use global events, as the event handler on the worker instance will only report on the events produced by the wrapper batch job, and not the jobs themselves.

It is possible, however, to call the getBatch function in order to retrieve all the jobs that belong to a given batch.
It is possible, however, to call the `getBatch` function in order to retrieve all the jobs that belong to a given batch.

```typescript
worker.on('completed', job => {
const batch = job.getBatch();
e;
// ...
});
```

@@ -82,7 +82,7 @@ queueEvents.on('completed', (jobId, err) => {

### Limitations

Currently, all worker options can be used with the batches, however, there are some unsupported features that may be implemented in the future:
Currently, all worker options can be used with batches; however, there are some unsupported features that may be implemented in the future:

- [Dynamic rate limit](https://docs.bullmq.io/guide/rate-limiting#manual-rate-limit)
- [Manually processing jobs](https://docs.bullmq.io/patterns/manually-fetching-jobs)
8 changes: 4 additions & 4 deletions docs/gitbook/bullmq-pro/groups/README.md
@@ -1,6 +1,6 @@
# Groups

Groups allows you to use only one queue yet distribute the jobs among groups so that the jobs are processed one by one relative to the group they belong to.
Groups allow you to use a single queue while distributing the jobs among groups so that the jobs are processed one by one relative to the group they belong to.

For example, imagine that you have one queue for processing video transcoding for all your users, and you may have thousands of users in your application. You need to offload the transcoding operation since it is lengthy and CPU-consuming. If many users want to transcode many files, then in a non-grouped queue one user could fill the queue with jobs, and the rest of the users would need to wait for that user to complete all its jobs before their own jobs get processed.

@@ -18,9 +18,9 @@ If you only use grouped jobs in a queue, the waiting jobs list will not grow, in
There is no hard limit on the number of groups that you can have, nor do they have any impact on performance. When a group is empty, the group itself does not consume any resources in Redis.
{% endhint %}

Another way to see groups is like "virtual" queues. So instead of having one queue per "user" you have a "virtual" queue per user so that all users get their jobs processed in a more predictable way.
Another way to see groups is like "virtual" queues. So instead of having one queue per "user", you have a "virtual" queue per user so that all users get their jobs processed in a more predictable way.

In order to use the group functionality just use the group property in the job options when adding a job:
In order to use the group functionality, use the group property in the job options when adding a job:

```typescript
import { QueuePro } from '@taskforcesh/bullmq-pro';
@@ -48,7 +48,7 @@ const job2 = await queue.add(
);
```

In order to process the jobs, just use a pro worker as you normally do with standard workers:
In order to process the jobs, use a pro worker as you normally do with standard workers:

```typescript
import { WorkerPro } from '@taskforcesh/bullmq-pro';
8 changes: 4 additions & 4 deletions docs/gitbook/bullmq-pro/groups/concurrency.md
@@ -1,10 +1,10 @@
# Concurrency

By default, there is no limit on the number of jobs that the workers can run in parallel for every group. Even using a rate limit, that would only limit the processing speed, but still you could have an unbounded number of jobs processed simultaneously in every group.
By default, there is no limit on the number of jobs that workers can run in parallel for every group. Even using a rate limit would only limit the processing speed; you could still have an unbounded number of jobs processed simultaneously in every group.

It is possible to constraint how many jobs are allowed to be processed concurrently per group, so for example, if you choose 3 as max concurrency factor, the workers will never work on more than 3 jobs at the same time for any given group. This limits only the group, you could have any number of concurrent jobs as long as they are not from the same group.
It is possible to constrain how many jobs are allowed to be processed concurrently per group. For example, if you choose 3 as max concurrency factor, the workers will never work on more than 3 jobs at the same time for any given group. This limits only the group; you could have any number of concurrent jobs as long as they are not from the same group.

You enable the concurrency setting like this:
The concurrency factor is configured as follows:

```typescript
import { WorkerPro } from '@taskforcesh/bullmq-pro';
@@ -13,7 +13,7 @@ const worker = new WorkerPro('myQueue', processFn, {
group: {
concurrency: 3 // Limit to max 3 parallel jobs per group
},
concurrency: 100
concurrency: 100,
connection
});
```
9 changes: 3 additions & 6 deletions docs/gitbook/bullmq-pro/groups/max-group-size.md
@@ -2,9 +2,9 @@

It is possible to set a maximum group size. This can be useful if you want to keep the number of jobs within some limits and you can afford to discard new jobs.

When a group has reached the defined max size, adding new jobs to that group will result in an exception being thrown, that you can catch and ignore if you do not care about it.
When a group has reached the defined max size, adding new jobs to that group will result in an exception being thrown that you can catch and ignore if you do not care about it.

You can use the "maxSize" option when adding jobs to a group like this:
You can use the `maxSize` option when adding jobs to a group like this:

```typescript
import { QueuePro, GroupMaxSizeExceededError } from '@taskforcesh/bullmq-pro';
@@ -25,11 +25,8 @@ try {
throw err;
}
}

```



{% hint style="info" %}
The maxSize option is not yet available for "addBulk".
The `maxSize` option is not yet available for `addBulk`.
{% endhint %}
10 changes: 5 additions & 5 deletions docs/gitbook/bullmq-pro/groups/pausing-groups.md
@@ -2,28 +2,28 @@

BullMQ Pro supports pausing groups globally. A group is paused when no workers will pick up any jobs that belong to it. When you pause a group, the workers that are currently busy processing a job from that group will continue working on that job until it completes (or fails), and then will just keep idling until the group has been resumed.

Pausing a group is performed by calling the _**pauseGroup**_ method on a [queue](https://api.bullmq.pro/classes/v6.Queue.html#pauseGroup) instance:
Pausing a group is performed by calling the `pauseGroup` method on a [`Queue`](https://api.bullmq.pro/classes/v6.Queue.html#pauseGroup) instance:

```typescript
await myQueue.pauseGroup('groupId');
```

{% hint style="info" %}
Even if the groupId does not exist at that time, the groupId will be added in our paused list as a group could be ephemeral
Even if the `groupId` does not exist at that time, it will be added to the paused list, as a group could be ephemeral.
{% endhint %}

{% hint style="warning" %}
It will return false if the group is already paused.
`pauseGroup` will return `false` if the group is already paused.
{% endhint %}

Resuming a group is performed by calling the _**resumeGroup**_ method on a [queue](https://api.bullmq.pro/classes/v6.Queue.html#resumeGroup) instance:
Resuming a group is performed by calling the `resumeGroup` method on a [`Queue`](https://api.bullmq.pro/classes/v6.Queue.html#resumeGroup) instance:

```typescript
await myQueue.resumeGroup('groupId');
```

{% hint style="warning" %}
It will return false if the group does not exist or when the group is already resumed.
`resumeGroup` will return `false` if the group does not exist or when the group is already resumed.
{% endhint %}

## Read more:
2 changes: 1 addition & 1 deletion docs/gitbook/bullmq-pro/groups/prioritized.md
@@ -1,6 +1,6 @@
# Prioritized intra-groups

BullMQ Pro supports priorities per group. A job is prioritized in a group when group and priority options are provided together.
BullMQ Pro supports priorities per group. A job is prioritized in a group when group and priority options are provided _together_.

```typescript
await myQueue.add(
6 changes: 3 additions & 3 deletions docs/gitbook/bullmq-pro/groups/rate-limiting.md
@@ -2,7 +2,7 @@

A useful feature when using groups is to be able to rate limit the groups independently of each other, so you can evenly process the jobs belonging to many groups and still limit how many jobs per group are allowed to be processed per unit of time.

The way the rate limiting works is that when the jobs for a given group exceed the maximum amount of jobs per unit of time that particular group gets rate limited. The jobs that belongs to this particular group will not be processed until the rate limit expires.
The way the rate limiting works is that when the jobs for a given group exceed the maximum amount of jobs per unit of time, that particular group gets rate limited. The jobs that belong to this particular group will not be processed until the rate limit expires.

For example "group 2" is rate limited in the following chart:

@@ -28,9 +28,9 @@ const worker = new WorkerPro('myQueue', processFn, {

### Manual rate-limit

Sometimes is useful to rate-limit a group manually instead of based on some static options. For example, if you have an API that returns 429 (Too many requests), and you want to rate-limit the group based on that response.
Sometimes it's useful to rate-limit a group manually instead of based on some static options. For example, if you have an API that returns `429 Too Many Requests`, and you want to rate-limit the group based on that response.

For this purpose, you can use the worker method **rateLimitGroup** like this:
For this purpose, you can use the worker method `rateLimitGroup` like this:

```typescript
import { WorkerPro } from '@taskforcesh/bullmq-pro';
8 changes: 4 additions & 4 deletions docs/gitbook/bullmq-pro/install.md
@@ -2,17 +2,17 @@

In order to install BullMQ Pro you need to use an NPM token from [taskforce.sh](https://taskforce.sh).

With the token at hand just update or create a ._**npmrc**_ file in your app repository with the following contents:
With the token at hand just update or create a `.npmrc` file in your app repository with the following contents:

```
@taskforcesh:registry=https://npm.taskforce.sh/
//npm.taskforce.sh/:_authToken=${NPM_TASKFORCESH_TOKEN}
always-auth=true
```

"NPM\_\_TASKFORCESH\_\_TOKEN" is an environment variable pointing to your token.
where `NPM_TASKFORCESH_TOKEN` is an environment variable holding your token (it must match the name referenced in the `.npmrc` above).

Then just install the @taskforcesh/bullmq-pro package as you would install any other package, with npm, yarn or pnpm:
Then just install the `@taskforcesh/bullmq-pro` package as you would install any other package, with `npm`, `yarn` or `pnpm`:

```
yarn add @taskforcesh/bullmq-pro
@@ -32,7 +32,7 @@ const worker = new WorkerPro('myQueue', async job => {

### Using Docker

If you use docker you must make sure that you also add the _**.npmrc**_ file above in your **Dockerfile**:
If you use Docker, you must make sure that you also add the `.npmrc` file above to your `Dockerfile`:

```docker
WORKDIR /app
