
[Bug] High Concurrency Causes SQLite "Database is Locked" Error, Service Crash #3442

Open
@kelvin-qin

Description


Contact Information

No response

MaxKB Version

v1.10.8-lts

Problem Description

Environment:

Version: maxkb v1.10.8-lts

OS: Linux (observed on Ubuntu 22.04)

Resource Constraints: 8 vCPU, 16GB RAM (lower-spec machines exacerbate the issue)

Under high concurrency, Celery tasks intermittently fail with sqlite3.OperationalError: database is locked, eventually causing the entire service to crash. This occurs when multiple Celery workers attempt simultaneous write operations on SQLite. The issue is more frequent on resource-limited machines due to slower I/O and contention.

Steps to Reproduce

Run MaxKB v1.10.8-lts with its SQLite-backed Celery setup and generate high-concurrency load so that multiple Celery workers write simultaneously. Tasks intermittently fail with sqlite3.OperationalError: database is locked, and the service eventually crashes. A minimal standalone sketch that reproduces the same error class is shown below.
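The sketch below is not MaxKB code; the database path and table are illustrative. It forces the error by giving each writer thread a zero busy timeout, so a concurrent writer fails immediately instead of waiting for the lock, mimicking contending Celery workers:

```python
# Minimal reproduction sketch (illustrative, not MaxKB code): several threads
# writing to one SQLite file with no busy timeout trigger "database is locked".
import sqlite3
import threading

DB_PATH = "lock_demo.db"

def writer(worker_id: int) -> None:
    # timeout=0: give up immediately instead of waiting for the write lock
    conn = sqlite3.connect(DB_PATH, timeout=0)
    try:
        for i in range(1000):
            conn.execute("INSERT INTO t (worker, seq) VALUES (?, ?)", (worker_id, i))
            conn.commit()
    except sqlite3.OperationalError as exc:  # "database is locked"
        print(f"worker {worker_id}: {exc}")
    finally:
        conn.close()

# Create the table once before the writers start.
conn = sqlite3.connect(DB_PATH)
conn.execute("CREATE TABLE IF NOT EXISTS t (worker INTEGER, seq INTEGER)")
conn.commit()
conn.close()

threads = [threading.Thread(target=writer, args=(n,)) for n in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```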

The expected correct result

Celery tasks should complete normally under high concurrency, without database lock errors or a service crash.

Related log output

```
sqlite3.OperationalError: database is locked
  File "django/db/backends/sqlite3/base.py", line 416, in execute
  ... [Celery Worker Stack Trace] ...
```

---
Suggested fix: replace SQLite with Redis as the Celery backend (a configuration sketch follows this list). SQLite is unsuitable for high-write Celery backends, while Redis offers:

Atomic operations & true concurrent writes.

Resilience under load (tested at 10k+ TPS).

Simpler scalability.
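For concreteness, this is a sketch of the proposed change in generic Celery terms; the app name and Redis URLs are placeholders, and the actual MaxKB setting names may differ:

```python
# Sketch of the proposed switch (app name and URLs are placeholders): point
# Celery's broker and result backend at Redis instead of a file-backed
# SQLite database.
from celery import Celery

app = Celery("maxkb")
app.conf.broker_url = "redis://localhost:6379/0"      # task queue
app.conf.result_backend = "redis://localhost:6379/1"  # task results
```

Celery supports Redis as both broker and result backend out of the box, so the task queue itself needs no schema migration.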

Additional Information

SQLite uses coarse-grained file locks for writes (one writer at a time). In high-throughput Celery scenarios:

Workers queue write transactions faster than SQLite can serialize them.

Resource starvation (CPU/RAM) extends transaction time, increasing lock contention.

Repeated lock timeouts cascade into task failures, destabilizing the service. An interim mitigation sketch follows below.
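As an interim mitigation (my suggestion, not an official fix, and the database path is a placeholder): enabling WAL journaling lets readers proceed while a write is in progress, and a busy timeout makes writers wait for the lock instead of failing immediately. SQLite still allows only one writer at a time, so this reduces, but does not remove, the contention:

```python
# Interim mitigation sketch: enable WAL and a busy timeout on the SQLite file.
import sqlite3

conn = sqlite3.connect("/path/to/maxkb.sqlite3", timeout=30)
conn.execute("PRAGMA journal_mode=WAL;")      # persistent; set once per database
conn.execute("PRAGMA busy_timeout = 30000;")  # per-connection, in milliseconds
conn.close()
```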
