10.0.0
- MAJOR: Node 20 is now the minimum required version.
- MAJOR: PostgreSQL 13 is now the minimum required version. The dependency on the pgcrypto extension has been removed.
- MAJOR: Automatic migration from v9 or lower is not currently available. A new partitioned job table was created, as well as new queue storage policies that make it difficult to honor the previous unique constraints in an upgrade scenario. The safest option is to manually move jobs from pg-boss v9 to v10 via the API, or at least use the API to prepare for a manual migration via `INSERT ... SELECT`.
- MAJOR: Job retries are now opt-out instead of opt-in. The default `retryLimit` is now 2 retries. This will cause an issue for any job handlers that aren't idempotent. Consider setting `retryLimit=0` on these queues if needed.
- MAJOR: Queues must now be created before the API or a direct SQL INSERT will work. See the migration notes below. Each queue has a storage policy (see below) and represents a child table in a partitioning hierarchy. Additionally, queues store default retry and retention policies that will be auto-applied to all new jobs. See the docs for more on queue operations such as `createQueue()`. A usage sketch follows this list.
  - `standard` (default): Standard queues are the default queue policy, which supports all existing features. This will provision a dedicated job partition for all jobs with this name.
  - `short`: Short queues only allow 1 item to be queued (in the `created` state), which replaces the previous `sendSingleton()` and `sendOnce()` functions.
  - `singleton`: Singleton queues only allow 1 item to be active, which replaces the previous `fetch()` option `enforceSingletonQueueActiveLimit`.
  - `stately`: Stately queues are a combination of `short` and `singleton`, only allowing 1 job to be queued and 1 job active.
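  A usage sketch, assuming a started `PgBoss` instance named `boss` (queue names are illustrative):

  ```js
  // Sketch: create queues with each storage policy.
  await boss.createQueue('emails')                                  // standard (default)
  await boss.createQueue('reports', { policy: 'short' })            // max 1 queued
  await boss.createQueue('cache-rebuild', { policy: 'singleton' })  // max 1 active
  await boss.createQueue('sync', { policy: 'stately' })             // 1 queued and 1 active
  ```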
- MAJOR: The handler function in `work()` was standardized to always receive an array of jobs. One simple way to migrate a single-job handler (`batchSize=1`) is a destructuring assignment like the following.

  ```js
  // v9
  await boss.work(queue, (job) => handler(job))

  // v10
  await boss.work(queue, ([ job ]) => handler(job))
  ```
- MAJOR: `teamSize`, `teamConcurrency`, and `teamRefill` were removed from `work()` to simplify worker polling use cases. As noted above, `enforceSingletonQueueActiveLimit` was also removed. See the batch sketch below.
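  A hedged sketch of batch processing in v10, assuming the `batchSize` option on `work()` (queue name and handler are illustrative):

  ```js
  // Sketch: fetch up to 5 jobs per polling interval; the handler receives an array.
  await boss.work('emails', { batchSize: 5 }, async (jobs) => {
    await Promise.all(jobs.map((job) => sendEmail(job.data))) // sendEmail is hypothetical
  })
  ```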
- MAJOR: Dead letter queues replace completion jobs. Failed jobs will be added to optional dead letter queues after exhausting all retries. This is preferred over completion jobs to gain retry support via `work()`. Additionally, dead letter queues only make a copy of the job if it fails, instead of filling up the job table with numerous, mostly unneeded completion jobs. See the sketch after this list.
  - The `onComplete` option in `send()` and `insert()` has been removed.
  - `onComplete()`, `offComplete()`, and `fetchCompleted()` have been removed.
  - The `deadLetter` option was added to `send()`, `insert()`, and `createQueue()`.
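  A hedged sketch of wiring a dead letter queue (queue names are illustrative):

  ```js
  // Sketch: jobs in 'payments' that exhaust all retries are copied to 'payments-dlq'.
  await boss.createQueue('payments-dlq')
  await boss.createQueue('payments', { deadLetter: 'payments-dlq' })

  // Dead-lettered jobs are plain jobs, so they can be retried via work()
  await boss.work('payments-dlq', async ([ job ]) => handleFailedPayment(job)) // handler is hypothetical
  ```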
- MAJOR: Dropped the following API functions in favor of policy queues:
  - `sendOnce()`
  - `sendSingleton()`
- MAJOR: The following API functions now require a name argument (see the sketch below):
  - `complete(name, id, data)`
  - `fail(name, id, data)`
  - `cancel(name, id)`
  - `getJobById(name, id)`
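  For example, completing a job now requires its queue name (queue name is illustrative):

  ```js
  // v9
  // await boss.complete(jobId)

  // v10
  await boss.complete('emails', jobId) // jobId obtained from send()
  ```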
- MAJOR: The contract for `getJobById()` and the `includeMetadata` option for `fetch()` and `work()` were standardized to the following.

  ```ts
  interface JobWithMetadata<T = object> {
    id: string;
    name: string;
    data: T;
    priority: number;
    state: 'created' | 'retry' | 'active' | 'completed' | 'cancelled' | 'failed';
    retryLimit: number;
    retryCount: number;
    retryDelay: number;
    retryBackoff: boolean;
    startAfter: Date;
    startedOn: Date;
    singletonKey: string | null;
    singletonOn: Date | null;
    expireIn: PostgresInterval;
    createdOn: Date;
    completedOn: Date | null;
    keepUntil: Date;
    deadLetter: string;
    policy: string;
    output: object;
  }
  ```
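  A hedged fetch sketch using `includeMetadata` (queue name is illustrative):

  ```js
  // Sketch: each returned job is widened to JobWithMetadata.
  const [ job ] = await boss.fetch('emails', { includeMetadata: true })
  console.log(job.state, job.retryCount, job.policy)
  ```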
- MAJOR: The columns in the job and archive tables were renamed to standardize on snake case. A sample job table script showing these is below.

  ```sql
  CREATE TABLE pgboss.job (
    id uuid not null default gen_random_uuid(),
    name text not null,
    priority integer not null default(0),
    data jsonb,
    state pgboss.job_state not null default('created'),
    retry_limit integer not null default(0),
    retry_count integer not null default(0),
    retry_delay integer not null default(0),
    retry_backoff boolean not null default false,
    start_after timestamp with time zone not null default now(),
    started_on timestamp with time zone,
    singleton_key text,
    singleton_on timestamp without time zone,
    expire_in interval not null default interval '15 minutes',
    created_on timestamp with time zone not null default now(),
    completed_on timestamp with time zone,
    keep_until timestamp with time zone not null default now() + interval '14 days',
    output jsonb,
    dead_letter text,
    policy text,
    CONSTRAINT job_pkey PRIMARY KEY (name, id)
  ) PARTITION BY LIST (name)
  ```
- MAJOR: The `work()` options `newJobCheckInterval` and `newJobCheckIntervalSeconds` have been replaced by `pollingIntervalSeconds`. The minimum value is 0.5 (500ms). For example:
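  ```js
  // Sketch: poll every 2 seconds instead of the default interval (queue name is illustrative).
  await boss.work('emails', { pollingIntervalSeconds: 2 }, async ([ job ]) => handler(job))
  ```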
- MAJOR: The `stop()` option `destroy` was renamed to `close`. Previously, `destroy` defaulted to `false`, leaving open the internal database connection that was created by `start()`. Now, `close` defaults to `true`.
- MAJOR: `noSupervisor` and `noScheduling` were renamed to a more intuitive naming convention. See the constructor sketch below.
  - If using `noSupervisor: true` to disable maintenance, instead use `supervise: false`.
  - If using `noScheduling: true` to disable scheduled cron jobs, use `schedule: false`.
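  A constructor sketch showing the renamed options (connection string is illustrative):

  ```js
  import PgBoss from 'pg-boss'

  const boss = new PgBoss({
    connectionString: 'postgres://user:pass@host/db',
    supervise: false, // formerly noSupervisor: true
    schedule: false   // formerly noScheduling: true
  })

  await boss.start()
  ```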
- MINOR: Added a new function, `deleteJob()`, to provide fetch -> delete semantics when job throttling and/or storage is not desired. See the sketch below.
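  A minimal sketch, assuming `deleteJob(name, id)` as the signature (queue name and handler are illustrative):

  ```js
  // Sketch: fetch, process, then delete instead of completing and retaining the job.
  const [ job ] = await boss.fetch('emails')

  if (job) {
    await handler(job.data)
    await boss.deleteJob('emails', job.id)
  }
  ```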
- MINOR: `send()` and `insert()` cascade configuration from policy queues (if they exist) and then global settings in the constructor. Use the following table to help identify which settings are inherited and when.

  | Setting | API | Queue | Constructor |
  | - | - | - | - |
  | `retryLimit` | `send()`, `insert()`, `createQueue()` | ✔️ | ✔️ |
  | `retryDelay` | `send()`, `insert()`, `createQueue()` | ✔️ | ✔️ |
  | `retryBackoff` | `send()`, `insert()`, `createQueue()` | ✔️ | ✔️ |
  | `expireInSeconds` | `send()`, `insert()`, `createQueue()` | ✔️ | ✔️ |
  | `expireInMinutes` | `send()`, `createQueue()` | ✔️ | ✔️ |
  | `expireInHours` | `send()`, `createQueue()` | ✔️ | ✔️ |
  | `retentionSeconds` | `send()`, `createQueue()` | ✔️ | ✔️ |
  | `retentionMinutes` | `send()`, `createQueue()` | ✔️ | ✔️ |
  | `retentionHours` | `send()`, `createQueue()` | ✔️ | ✔️ |
  | `retentionDays` | `send()`, `createQueue()` | ✔️ | ✔️ |
  | `deadLetter` | `send()`, `insert()`, `createQueue()` | ✔️ |  |
- MINOR: Added a primary key to the job archive table to support replication use cases such as read replicas or high availability standbys.
- MINOR: Added a new constructor option, `migrate: false`, to block an instance from attempting to migrate to the latest database schema version. This is useful if the configured credentials don't have schema modification privileges, or if complete control over when and how migrations run is required.
- MINOR: The `expired` state has been consolidated into `failed` for simplicity.
- MINOR: Added a `priority: false` option to `work()` and `fetch()` to opt out of priority sorting during job fetching. If a queue is very large and not using the priority feature, this may improve job fetch performance. For example:
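  ```js
  // Sketch: skip priority sorting when fetching from a large queue (names illustrative).
  const jobs = await boss.fetch('bulk-import', { batchSize: 100, priority: false })
  ```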
- MINOR: Added a maintenance function, `maintain()`, if needed for serverless and/or externally scheduled maintenance use cases.
- MINOR: Added the functions `isInstalled()` and `schemaVersion()`.
- MINOR: `stop()` will now wait for the default graceful stop timeout (30s) before resolving its promise. The `stopped` event will still be emitted. If you want the original behavior, set the new `wait` option to `false`. For example:
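  ```js
  // Sketch: resolve stop() immediately instead of waiting for the graceful stop timeout.
  await boss.stop({ wait: false })
  ```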
- MINOR: Added an `id` property as an option to `send()` for pre-assigning the job id. Previously, only `insert()` supported pre-assignment. For example:
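  A sketch (uuid generation and queue name are illustrative):

  ```js
  import { randomUUID } from 'node:crypto'

  // Sketch: pre-assign the job id so it can be recorded before the job is sent.
  const id = randomUUID()
  await boss.send('emails', { to: 'user@example.com' }, { id })
  ```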
- MINOR: Removed internal usage of the `md5()` hashing function for those needing FIPS compliance.
## Migration Notes

This section contains notes and tips on different migration strategies from v9 and below to v10. Since auto-migration is not supported, there are a few manual options to get all of your jobs from v9 into v10.
### API option

For each queue, use `createQueue()` on the v10 instance, then `fetch()` jobs from v9 and `insert()` them into v10. A hedged sketch follows.
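A hedged sketch, assuming `v9boss` and `v10boss` are PgBoss instances configured for the old and new schemas:

```js
const queueName = 'emails' // illustrative

await v10boss.createQueue(queueName)

// v9 fetch(name, batchSize) returns null once the queue is drained;
// fetched jobs become active in v9, so the loop terminates.
let jobs

while ((jobs = await v9boss.fetch(queueName, 100))) {
  // insert() accepts pre-assigned ids, preserving job identity across schemas
  await v10boss.insert(jobs.map(({ id, data }) => ({ id, name: queueName, data })))
}
```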
### SQL option

Something along the lines of `INSERT INTO v10.job (...) SELECT ... FROM v9.job` is possible in SQL, but the queues have to be created first.

- Run `SELECT pgboss.create_queue(name, options)`.
- Insert records into `pgboss.job`.
### Hybrid option

- Create all required queues using the API.
- Now that the queues exist, iterate through each source queue and insert only queued items (`state = 'created'`) via SQL.
```js
// Assumes: `boss` is a started v10 PgBoss instance with all queues created,
// `client` is a connected pg client, `log` is a logger, and
// `sourceSchema`/`targetSchema` name the v9 and v10 pg-boss schemas.
const queues = await boss.getQueues();

for (const queue of queues) {
  try {
    const sql = `
      INSERT INTO ${targetSchema}.job (
        id,
        name,
        priority,
        data,
        retry_limit,
        retry_count,
        retry_delay,
        retry_backoff,
        start_after,
        singleton_key,
        singleton_on,
        expire_in,
        created_on,
        keep_until,
        output,
        policy
      )
      SELECT
        id,
        name,
        priority,
        data,
        retryLimit,
        retryCount,
        retryDelay,
        retryBackoff,
        startAfter,
        singletonKey,
        singletonOn,
        expireIn,
        createdOn,
        keepUntil,
        output,
        '${queue.policy}' as policy
      FROM ${sourceSchema}.job
      WHERE name = '${queue.name}'
        AND state = 'created'
      ON CONFLICT DO NOTHING
    `;

    const { rowCount } = await client.query(sql);

    if (rowCount) {
      log.info(`pg-boss v10 migration: Migrated ${rowCount} jobs in queue ${queue.name}`);
    }
  } catch (error) {
    log.error(`pg-boss v10 migration: error copying jobs from '${queue.name}': ${error.message}`);
  }
}
```
## What's Changed

- Update readme.md by @LavredisG in #431
- v10 by @timgit in #425
## New Contributors

- @LavredisG made their first contribution in #431

**Full Changelog**: 9.0.3...10.0.0