Description
get_engine() in src/analytics/models.py and _pg_engine() in multiple files each call create_engine() without configuring connection pool settings. Because every call creates a fresh engine with its own pool, each analytics write opens a new connection and the total connection count grows without bound. Postgres defaults to max_connections=100, so a spike in QR validations can exhaust all connections.
Requirements & context
- Centralise all engine creation into a single src/db.py module with a module-level engine singleton
- Configure the pool: pool_size=5, max_overflow=10, pool_timeout=30, pool_recycle=1800
- Add POOL_SIZE and POOL_MAX_OVERFLOW env vars to Settings with the above defaults
- Replace all create_engine() / get_engine() / _pg_engine() calls across the codebase to use the singleton
- Add GET /health/db that runs SELECT 1 and returns pool stats: { "pool_size", "checked_in", "checked_out", "overflow" }
Suggested execution
- git checkout -b feat/db-connection-pooling
- Create src/db.py
- Update src/analytics/models.py, src/etl/__init__.py, src/report_service.py
- Add /health/db route
- Write tests
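The core of the /health/db route can be reduced to a small helper that takes the engine, which also makes the "write tests" step straightforward. A sketch, assuming SQLAlchemy's QueuePool stats API; the route wiring (e.g. a FastAPI handler calling this with the src/db.py singleton) is an assumption, not confirmed by the issue:

```python
# Sketch of the health-check logic behind GET /health/db.
from sqlalchemy import text
from sqlalchemy.engine import Engine


def db_health(engine: Engine) -> dict:
    # SELECT 1 proves a connection can be checked out and used.
    with engine.connect() as conn:
        conn.execute(text("SELECT 1"))
    pool = engine.pool
    return {
        "pool_size": pool.size(),
        "checked_in": pool.checkedin(),
        "checked_out": pool.checkedout(),
        "overflow": pool.overflow(),
    }
```

A route handler would then just return db_health(get_engine()); because the helper accepts any engine, tests can pass in a throwaway SQLite engine.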
Guidelines
- PR must include: Closes #[issue_id]
- Timeframe: 72 hours