docs: fix minor typos (#2270)
roggervalf committed Nov 10, 2023
1 parent d26fcc5 commit 9a469da
Showing 9 changed files with 8 additions and 16 deletions.
4 changes: 2 additions & 2 deletions docs/gitbook/bull-3.x-migration/compatibility-class.md
@@ -1,6 +1,6 @@
# Compatibility class

-The Queue3 class is targeted to simplify migration of projects based on Bull 3. Though it does not offer 100% API and functional compatibility, upgrading to BullMQ with this class should be easier for users familiar with Bull 3.
+The `Queue3` class is targeted to simplify migration of projects based on Bull 3. Though it does not offer 100% API and functional compatibility, upgrading to BullMQ with this class should be easier for users familiar with Bull 3.

Differences in interface include

@@ -11,7 +11,7 @@ Differences in interface include
* name property is mandatory in `add()` method
* concurrency is moved from `process()` argument to queue options

-Functional differences generally include only absence of named processors feature and minor changes in local and global events set. The mandatory `name` property in `add()` method can contain any string and getting saved to Redis as is. When job is in progress, you can read this value as `job.name` \(`job.data` and `job.id` are available as usual\). See the \[link\] for details.
+Functional differences generally include only absence of named processors feature and minor changes in local and global events set. The mandatory `name` property in `add()` method can contain any string and gets saved to Redis as is. When a job is in progress, you can read this value as `job.name` \(`job.data` and `job.id` are available as usual\). See the \[link\] for details.
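The `name` semantics described above can be sketched in miniature; this is an in-memory stand-in, not the library itself, and the job names are illustrative (with `Queue3` the real call would be `queue.add('wall', { color: 'red' })`):

```typescript
// In-memory sketch of the mandatory `name` argument in add():
// the name is stored with the job as-is and read back as job.name,
// alongside job.data and job.id.
interface SketchJob {
  id: number;
  name: string; // mandatory, saved as-is
  data: unknown;
}

let nextId = 1;
const jobs: SketchJob[] = [];

function add(name: string, data: unknown): SketchJob {
  const job = { id: nextId++, name, data };
  jobs.push(job);
  return job;
}

const job = add('wall', { color: 'red' });
// Inside a processor you would read job.name, job.data and job.id.
console.log(job.name, job.id); // → wall 1
```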

The all-in-one example:

4 changes: 2 additions & 2 deletions docs/gitbook/bull/patterns/manually-fetching-jobs.md
@@ -1,6 +1,6 @@
# Manually fetching jobs

-If you want to manually fetch the jobs from the queue instead of letting the automatic processor taking care of it, this pattern is for your.
+If you want to manually fetch the jobs from the queue instead of letting the automatic processor take care of it, this pattern is for you.

Manually transitioning states for jobs can be done with a few simple methods.

@@ -53,4 +53,4 @@ if (nextJobdata) {

**Note**

-By default the lock duration for a job that has been returned by `getNextJob` or `moveToCompleted` is 30 seconds, if it takes more time than that the job will be automatically marked as stalled and depending on the max stalled options be moved back to the wait state or marked as failed. In order to avoid this you must use `job.extendLock(duration)` in order to give you some more time before the lock expires. The recommended is to extend the lock when half the lock time has passsed.
+By default, the lock duration for a job returned by `getNextJob` or `moveToCompleted` is 30 seconds. If processing takes longer than that, the job is automatically marked as stalled and, depending on the max stalled options, moved back to the wait state or marked as failed. To avoid this, call `job.extendLock(duration)` to gain more time before the lock expires. It is recommended to extend the lock when half the lock time has passed.
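The timing rule in the note — renew when half the lock time has passed — can be sketched as follows; the Bull calls in the comments assume the API described above, and the interval values are illustrative:

```typescript
// Extend the lock once half of its duration has elapsed, repeatedly,
// so a long-running manually-fetched job is never marked as stalled.
const LOCK_DURATION_MS = 30_000; // the default mentioned in the note

function extensionIntervalMs(lockDurationMs: number): number {
  // Renew at the halfway point, as the docs recommend.
  return Math.floor(lockDurationMs / 2);
}

// Usage sketch (assumes a job fetched with getNextJob):
//   const timer = setInterval(
//     () => job.extendLock(LOCK_DURATION_MS),
//     extensionIntervalMs(LOCK_DURATION_MS),
//   );
//   ...do the work, then move the job to completed/failed...
//   clearInterval(timer);

console.log(extensionIntervalMs(LOCK_DURATION_MS)); // → 15000
```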
2 changes: 1 addition & 1 deletion docs/gitbook/bull/patterns/message-queue.md
@@ -1,6 +1,6 @@
# Message queue

-Bull can also be used for persistent message queues. This is a quite useful feature in some use cases. For example, you can have two servers that need to communicate with each other. By using a queue the servers do not need to be online at the same time, so this creates a very robust communication channel. You can treat `add` as _send_ and `process` as _receive_:
+Bull can also be used for persistent message queues. This is a quite useful feature in some use cases. For example, you can have two servers that need to communicate with each other. By using a queue, the servers do not need to be online at the same time, so this creates a very robust communication channel. You can treat `add` as _send_ and `process` as _receive_:

Server A:

2 changes: 1 addition & 1 deletion docs/gitbook/bull/patterns/persistent-connections.md
@@ -1,6 +1,6 @@
# Persistent connections

-A crucial feature for a subsystem in a microservice architecture is that it should automatically handle disconnections from other services and keep this connections alive for as long as the service is running.
+A crucial feature for a subsystem in a microservice architecture is that it should automatically handle disconnections from other services and keep these connections alive for as long as the service is running.

For example, if your service has a connection to a database, and the connection to said database breaks, you would like that service to handle this disconnection as gracefully as possible and as soon as the database is back online continue to work without human intervention.
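With ioredis (the Redis client Bull uses), this behaviour hinges on a retry strategy that never gives up; a sketch, with illustrative backoff values:

```typescript
// Linear backoff capped at 5 seconds. Returning a number tells
// ioredis to retry the connection after that many milliseconds,
// forever, so the service recovers without human intervention.
function retryStrategy(times: number): number {
  return Math.min(times * 500, 5_000);
}

// Usage sketch (assumes ioredis is installed):
//   const client = new Redis({ retryStrategy });

console.log(retryStrategy(1), retryStrategy(100)); // → 500 5000
```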

3 changes: 0 additions & 3 deletions docs/gitbook/bull/patterns/redis-cluster.md
@@ -12,6 +12,3 @@ const queue = new Queue('cluster', {

If you use several queues in the same cluster, you should use different prefixes so that the queues are evenly placed in the cluster nodes.

-###
-
-\
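A prefix wrapped in curly braces acts as a Redis Cluster hash tag, so all of one queue's keys hash to a single node while different queues spread across the cluster; a sketch (queue names and tags are illustrative):

```typescript
// Build a hash-tagged prefix: only the part between { and } is
// hashed by Redis Cluster, which keeps one queue's keys together
// on a single node.
function clusterPrefix(tag: string): string {
  return `{${tag}}`;
}

// Usage sketch with Bull:
//   const queue1 = new Queue('one', { prefix: clusterPrefix('q1') });
//   const queue2 = new Queue('two', { prefix: clusterPrefix('q2') });

console.log(clusterPrefix('q1')); // → {q1}
```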
4 changes: 0 additions & 4 deletions docs/gitbook/bull/patterns/returning-job-completions.md
@@ -2,9 +2,5 @@

A common pattern is where you have a cluster of queue processors that just process jobs as fast as they can, and some other services that need to take the result of these processors and do something with it, maybe storing results in a database.

-\
The most robust and scalable way to accomplish this is by combining the standard job queue with the message queue pattern: a service sends jobs to the cluster just by opening a job queue and adding jobs to it, and the cluster will start processing as fast as it can. Every time a job gets completed in the cluster, a message is sent to a results message queue with the result data, and this queue is listened to by some other service that stores the results in a database.

-
-
-\
1 change: 0 additions & 1 deletion docs/gitbook/bull/patterns/reusing-redis-connections.md
Expand Up @@ -35,5 +35,4 @@ const opts = {

const queueFoo = new Queue("foobar", opts);
const queueQux = new Queue("quxbaz", opts);

```
2 changes: 1 addition & 1 deletion docs/gitbook/bullmq-pro/install.md
@@ -37,5 +37,5 @@ If you use docker you must make sure that you also add the _**.npmrc**_ file abo
```docker
WORKDIR /app
-ADD .npmrc /app/.npmr
+ADD .npmrc /app/.npmrc
```
2 changes: 1 addition & 1 deletion docs/gitbook/bullmq-pro/observables/cancelation.md
@@ -1,6 +1,6 @@
# Cancellation

-As mentioned, Observables allows for clean cancellation. Currently we support a TTL value that defines the maximum processing time before the job is finally cancelled:
+As mentioned, Observables allow for clean cancellation. Currently we support a TTL value that defines the maximum processing time before the job is finally cancelled:

```typescript
import { WorkerPro } from '@taskforcesh/bullmq-pro';
// Sketch of the idea (assumes the Pro worker accepts a `ttl` option
// in milliseconds; `processor` and `connection` are placeholders):
const worker = new WorkerPro('my-queue', processor, {
  ttl: 100,
  connection,
});
```
