adjust unique job comments and add tests #7
Conversation
Pull request overview
Updates documentation and tests to reflect/verify the intended semantics of “unique” jobs (including while running and during retries), plus a small test-infra Makefile update.
Changes:
- Clarify unique-job semantics in the `enqueue.go` docstrings and `README.md`.
- Add tests ensuring unique locks prevent duplicate enqueues while a job is actively processing (keyed and non-keyed).
- Switch test-infra commands from `docker-compose` to `docker compose` in the Makefile.
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| `enqueue_test.go` | Expands unique-job test commentary and adds new tests asserting the uniqueness lock is held during active processing. |
| `enqueue.go` | Updates public API comments for `EnqueueUnique*` to describe uniqueness behavior across queue/processing/retry/dead. |
| `README.md` | Adds/extends documentation describing when the unique Redis key is held and cleared. |
| `Makefile` | Updates test setup/teardown to use the Docker Compose v2 CLI (`docker compose`). |
```go
// While a job is enqueued, being processed, or present in the retry queue, the unique lock is held
// and another job with the same name and key cannot be enqueued. The unique key is removed only after
// the job finishes (or is moved to the dead queue), at which point a new unique job may be enqueued.
// In order to add robustness to the system, jobs are only unique for 24 hours after they're enqueued —
// this is mostly relevant for scheduled jobs.
```
Copilot (AI) · Jan 22, 2026
Same documentation issue as EnqueueUnique: the unique key has a 24h TTL in the Lua script (redis.go:381) and can expire before the job completes/reaches dead, so the lock is not guaranteed to persist for the full lifecycle. Please reflect this TTL/expiry behavior in this doc comment (or change the implementation to refresh TTL as needed).
```go
// While a job is enqueued, being processed, or present in the retry queue, the unique lock is held
// and another job with the same name and arguments cannot be enqueued. The unique key is removed
// only after the job finishes (or is moved to the dead queue), at which point a new unique job may
// be enqueued. In order to add robustness to the system, jobs are only unique for 24 hours after
// they're enqueued — this is mostly relevant for scheduled jobs.
```
Copilot (AI) · Jan 22, 2026
The comment says the unique key is removed only after the job finishes or is moved to the dead queue, but the Redis unique key is also set with a 24h TTL (see redis.go:366) and is not refreshed during processing/retries. That means the lock can expire before completion for long-running or long-delayed jobs. Please clarify the docstring to mention the TTL/possible expiry (or adjust the implementation to refresh the TTL while the job exists).
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>