Named lock tool for multi-process shell synchronization.
Provides acquire / release / wrap primitives keyed by a symbolic name,
using flock + detached background holder processes. Designed to be composable
with shell scripts, systemd units, and cron jobs.
```
namedlock acquire <name> [--wait] [--timeout <seconds>]
namedlock release <name>
namedlock status [<name>]
namedlock list
namedlock wrap <name> [--wait] [--timeout <secs>] -- <cmd> [args…]
```
Acquires an exclusive named lock. By default (non-blocking), exits immediately with code 1 if the lock is already held.
```bash
namedlock acquire my-job                      # non-blocking
namedlock acquire my-job --wait               # block indefinitely
namedlock acquire my-job --wait --timeout 30  # block up to 30 s
```

Prints the holder PID on success. `--timeout` implies `--wait`.
Releases a previously acquired lock. Always exits 0 — safe to call from cleanup handlers even if the lock was never acquired.
```bash
namedlock release my-job
```

Prints a human-readable table of lock state.
```bash
namedlock status         # all known locks
namedlock status my-job  # single lock
```

Example output:

```
my-job       HELD by PID 12345 (runtime: 42s)
other-lock   FREE
stale-lock   STALE (PID 99999 not running)
```
Prints one active lock name per line (for scripting).
```bash
for lock in $(namedlock list); do
  echo "active: $lock"
done
```

Acquires the lock, runs a command, then releases the lock automatically — even if the command fails or is interrupted.
```bash
namedlock wrap my-job -- rsync -av /src/ /dst/
namedlock wrap my-job --wait --timeout 60 -- ./long-running-task.sh
```

The wrapped command's exit code is propagated.
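The release-on-exit and exit-code behavior can be reproduced with plain `flock` in a single process; the following is a minimal, self-contained sketch of that pattern, not namedlock's actual implementation:

```bash
#!/usr/bin/env bash
# Sketch: run a command under an exclusive lock and propagate its
# exit code, similar in spirit to `namedlock wrap`.
lockfile=$(mktemp)

rc=0
flock "$lockfile" bash -c 'echo "inside lock"; exit 7' || rc=$?

# flock exits with the wrapped command's status, so rc is 7 here.
echo "exit code: $rc"
rm -f "$lockfile"
```

Unlike this sketch, namedlock keeps the lock alive across separate `acquire` and `release` invocations by parking a holder process in the background.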
| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | Lock already held (acquire without --wait), or release/wrap error |
| 2 | Invalid arguments |
| 75 | Timeout waiting for lock (EX_TEMPFAIL — compatible with systemd restart policies) |
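A wrapper script can branch on these codes; the sketch below uses stand-in commands (it does not invoke namedlock) and a hypothetical `run_guarded` helper:

```bash
#!/usr/bin/env bash
# Dispatch on the documented exit codes (see table above).
run_guarded() {
  local rc=0
  "$@" || rc=$?
  case $rc in
    0)  echo "ran" ;;
    1)  echo "lock busy, skipped" ;;
    2)  echo "bad arguments" ;;
    75) echo "timed out waiting for lock" ;;
    *)  echo "failed with $rc" ;;
  esac
}

run_guarded true               # prints "ran"
run_guarded bash -c 'exit 75'  # prints "timed out waiting for lock"
```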
The lock directory is resolved in priority order:
1. `$NAMEDLOCK_DIR` — user override
2. `$XDG_RUNTIME_DIR/namedlock` — systemd user runtime dir (tmpfs, auto-cleaned on logout)
3. `$HOME/.cache/namedlock` — persistent fallback
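The same priority order, expressed as a small shell helper (the function name is illustrative, not part of the tool):

```bash
#!/usr/bin/env bash
# Resolve the lock directory using the documented priority order.
namedlock_dir() {
  if [[ -n ${NAMEDLOCK_DIR:-} ]]; then
    echo "$NAMEDLOCK_DIR"
  elif [[ -n ${XDG_RUNTIME_DIR:-} ]]; then
    echo "$XDG_RUNTIME_DIR/namedlock"
  else
    echo "$HOME/.cache/namedlock"
  fi
}

NAMEDLOCK_DIR=/tmp/custom namedlock_dir   # prints /tmp/custom
```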
Per lock: `<dir>/<name>.lock` (held open by flock) + `<dir>/<name>.pid`

Lock names are restricted to `[a-zA-Z0-9_.-]` — safe as filenames, no path traversal.
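That restriction amounts to a single regex check; an illustrative sketch (the helper name is hypothetical):

```bash
#!/usr/bin/env bash
# Accept only names built from the documented character set.
is_valid_name() {
  [[ $1 =~ ^[a-zA-Z0-9_.-]+$ ]]
}

is_valid_name "my-job.v2" && echo "ok"             # prints "ok"
is_valid_name "../etc/passwd" || echo "rejected"   # prints "rejected"
```

Rejecting `/` (and anything outside the whitelist) is what makes the name safe to embed directly in the lock-file path.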
| Variable | Purpose |
|---|---|
| `NAMEDLOCK_DIR` | Override lock directory |
| `NAMEDLOCK_LOG` | If set, append structured log lines to this file |
Log format: `[YYYY-MM-DD HH:MM:SS] [LEVEL] [namedlock] …`
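A function producing lines of that shape might look like the following (sketch only; not the tool's internal logger, and `nl_log` is a made-up name):

```bash
#!/usr/bin/env bash
# Emit a structured log line in the documented format.
nl_log() {
  printf '[%s] [%s] [namedlock] %s\n' \
    "$(date '+%Y-%m-%d %H:%M:%S')" "$1" "$2"
}

nl_log INFO "acquired my-job"
# e.g. [2024-01-01 12:00:00] [INFO] [namedlock] acquired my-job
```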
`acquire` spawns a detached background process (`nohup bash -c …`) that opens
the lock file on fd 9 and calls `flock`. The holder writes its PID atomically
(via `.tmp` + `mv`) and then loops indefinitely, keeping fd 9 open. The parent
polls the pidfile (0.1 s interval) until the holder is confirmed running.

`release` reads the PID, sends SIGTERM, waits 0.5 s, sends SIGKILL if still
alive, then removes both files.
Stale locks (pidfile present but process dead) are detected and cleaned up
automatically on the next acquire.
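The mechanism can be demonstrated end to end with plain bash and flock. The following is a condensed, self-contained sketch of the pattern described above — simplified (no stale-lock handling, no SIGTERM/SIGKILL escalation), with illustrative paths:

```bash
#!/usr/bin/env bash
# Sketch of the acquire/release mechanism described above.
set -u

dir=$(mktemp -d)
lock="$dir/demo.lock"
pidfile="$dir/demo.pid"

# Holder: open the lock file on fd 9, take the flock, publish the PID
# atomically (.tmp + mv), then keep fd 9 open indefinitely.
nohup bash -c '
  exec 9>"$0"
  flock -n 9 || exit 1
  echo $$ > "$1.tmp" && mv "$1.tmp" "$1"
  exec sleep infinity
' "$lock" "$pidfile" >/dev/null 2>&1 &

# Parent: poll until the holder has published its PID.
for _ in $(seq 1 50); do
  if [[ -s $pidfile ]]; then break; fi
  sleep 0.1
done
holder=$(cat "$pidfile")

# While the holder lives, a non-blocking flock on the same file fails.
flock -n "$lock" true || echo "lock is held by PID $holder"

# Release: kill the holder (closing fd 9 drops the lock), remove files.
kill "$holder" 2>/dev/null
rm -f "$lock" "$pidfile"
```

The `exec sleep infinity` keeps the published PID and fd 9 in the same process, so killing that one PID reliably drops the lock.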
The test suite uses bats (Bash Automated Testing System).
Requirements: bats, bats-support, bats-assert, bats-file (all via apt — see Dependencies).
```bash
make test
# or directly:
bats tests/namedlock.bats
```

Coverage includes CLI validation, acquire/release lifecycle, stale lock detection,
blocking wait with timeout, wrap exit-code propagation, lock-directory resolution,
logging, and concurrent-acquire mutual exclusion.
| Dependency | Kind | Package | Notes |
|---|---|---|---|
| `bash` ≥ 4.2 | runtime | pre-installed | associative arrays, `[[ ]]` |
| `flock` | runtime | util-linux | pre-installed on Debian/Ubuntu |
| `sleep infinity` | runtime | coreutils | pre-installed on Debian/Ubuntu |
| `bats` ≥ 1.5 | test | bats | `sudo apt install bats` |
| `bats-support` | test | bats-support | `sudo apt install bats-support` |
| `bats-assert` | test | bats-assert | `sudo apt install bats-assert` |
| `bats-file` | test | bats-file | `sudo apt install bats-file` |
| `shellcheck` | lint | shellcheck | `sudo apt install shellcheck` |
Install all at once:

```bash
make install-deps
# or manually:
sudo apt install bats bats-support bats-assert bats-file shellcheck
```

Check all dependencies are present:

```bash
make check-deps
```

```bash
make install              # installs to ~/.local/bin/namedlock
make install PREFIX=/usr  # system-wide
```

Or symlink directly:

```bash
ln -s /path/to/tools/namedlock/bin/namedlock ~/.local/bin/namedlock
```

For fleet deployments or IaC-managed hosts, an Ansible role is provided under `ansible/`.
The install scope is controlled by the `install_scope` variable:

| `install_scope` | Install path | Runs as |
|---|---|---|
| `user` (default) | `~/.local/bin/` | current user |
| `system` | `/usr/local/bin/` | root (via sudo) |
No `--become` flag is needed — the playbook manages privilege escalation
internally based on `install_scope`.

Overridable variables (pass with `-e`):
| Variable | Default | Description |
|---|---|---|
| `install_scope` | `user` | `user` or `system` |
| `namedlock_install_dir` | `~/.local/bin` | install path for user scope |
| `namedlock_system_install_dir` | `/usr/local/bin` | install path for system scope |
| `namedlock_source` | `bin/namedlock` (on controller) | path to binary on control node |
| `namedlock_install_test_deps` | `false` | also install bats/shellcheck test deps |
```bash
# Local - user install
ansible-playbook -i ansible/inventory.example.yml ansible/install.yml --limit localhost

# Local - system install
ansible-playbook -i ansible/inventory.example.yml ansible/install.yml --limit localhost \
  -e install_scope=system

# Remote - user install
ansible-playbook -i ansible/inventory.example.yml ansible/install.yml \
  --limit myserver.example.com

# Remote - system install
ansible-playbook -i ansible/inventory.example.yml ansible/install.yml \
  --limit myserver.example.com \
  -e install_scope=system

# With test/lint deps (e.g. CI host)
ansible-playbook -i ansible/inventory.example.yml ansible/install.yml --limit localhost \
  -e install_scope=system \
  -e namedlock_install_test_deps=true

# Dry run
ansible-playbook -i ansible/inventory.example.yml ansible/install.yml --limit localhost \
  --check --diff
```

To uninstall, swap the playbook — all flags mirror install:
```bash
# Local - user uninstall
ansible-playbook -i ansible/inventory.example.yml ansible/uninstall.yml --limit localhost

# Local - system uninstall
ansible-playbook -i ansible/inventory.example.yml ansible/uninstall.yml --limit localhost \
  -e install_scope=system

# Remote - user uninstall
ansible-playbook -i ansible/inventory.example.yml ansible/uninstall.yml \
  --limit myserver.example.com

# Remote - system uninstall
ansible-playbook -i ansible/inventory.example.yml ansible/uninstall.yml \
  --limit myserver.example.com \
  -e install_scope=system
```

Copy `ansible/inventory.example.yml` to `ansible/inventory.<hostname>.yml` and edit for your environment.
The role reads the binary from the controller (`bin/namedlock`) and pushes it to targets — no `make` required on remote hosts.
```bash
# All locks (human-readable table)
namedlock status

# Single lock
namedlock status borgmatic
```

Example output:

```
borgmatic    HELD by PID 12345 (runtime: 42s)
other-lock   FREE
stale-lock   STALE (PID 99999 not running)
```
```bash
namedlock list
```

```bash
# Resolve the active lock directory
echo "${NAMEDLOCK_DIR:-${XDG_RUNTIME_DIR:+$XDG_RUNTIME_DIR/namedlock}}"
# fallback: $HOME/.cache/namedlock
ls -la "${XDG_RUNTIME_DIR:-$HOME/.cache}/namedlock/"
```

Each lock leaves two files: `<name>.lock` (held open by flock) and `<name>.pid`.
```bash
export NAMEDLOCK_LOG=/var/log/namedlock.log
namedlock acquire borgmatic
tail -f /var/log/namedlock.log
```

Log format: `[YYYY-MM-DD HH:MM:SS] [LEVEL] [namedlock] …`
The binary is not in PATH. Install it or add the install directory:

```bash
# User install
export PATH="$HOME/.local/bin:$PATH"
# System install
sudo cp bin/namedlock /usr/local/bin/namedlock
```

`namedlock status` reports STALE when the holder PID is no longer running.
Stale locks are detected and cleaned up automatically on the next acquire:

```bash
namedlock status borgmatic   # shows STALE
namedlock acquire borgmatic  # auto-cleans stale lock, then acquires
```

To force-release manually:

```bash
namedlock release borgmatic  # always exits 0, safe on stale locks
```

When borgmatic is configured with namedlock hooks and a backup fails,
the `after: error` hook should release the lock:

```yaml
commands:
  - after: error
    run:
      - namedlock release borgmatic
```

If the hook itself failed (e.g. namedlock not in PATH for the systemd
service), the lock will be stale. Verify the binary is accessible in the
service's environment:
```bash
# User service
systemctl --user show borgmatic.service | grep Environment
# Release manually
namedlock release borgmatic
```

A concurrent process holds the lock. Use `--wait` to block until it is
released, or `--timeout` to cap the wait:

```bash
namedlock acquire borgmatic --wait --timeout 60
```

The lock was not released within the timeout period. Check what process holds it and whether it is still healthy:

```bash
namedlock status borgmatic
ps -p "$(cat "${XDG_RUNTIME_DIR:-$HOME/.cache}/namedlock/borgmatic.pid")"
```

The lock directory is not writable. Override with `NAMEDLOCK_DIR`:

```bash
export NAMEDLOCK_DIR="/tmp/namedlock-$(id -u)"
mkdir -p "$NAMEDLOCK_DIR"
```

`flock` is part of util-linux, pre-installed on Debian/Ubuntu. If missing:
```bash
sudo apt install util-linux
```

```bash
# 1. Basic acquire/release
namedlock acquire test-lock
namedlock status test-lock   # HELD
namedlock list               # test-lock
namedlock release test-lock
namedlock status test-lock   # FREE

# 2. Conflict detection
namedlock acquire test-lock
namedlock acquire test-lock  # exits 1
namedlock release test-lock

# 3. Blocking wait
namedlock acquire test-lock
( sleep 2; namedlock release test-lock ) &
namedlock acquire test-lock --wait --timeout 5   # succeeds after ~2 s

# 4. Wrap
namedlock wrap test-lock -- echo "inside lock"
namedlock status test-lock   # FREE (auto-released)

# 5. Stale lock cleanup
namedlock acquire test-lock
kill "$(cat "${XDG_RUNTIME_DIR:-$HOME/.cache}/namedlock/test-lock.pid")"
namedlock acquire test-lock  # detects stale, succeeds

# 6. Timeout exit code
namedlock acquire test-lock
namedlock acquire test-lock --wait --timeout 1   # exits 75
namedlock release test-lock
```