## Summary

psqlpy raises `OverflowError: out of range integral type conversion attempted` when a Python `dict` containing integer values larger than int32 (2,147,483,647) is passed to a `jsonb` column, or wrapped via `psqlpy.extra_types.JSONB`. PostgreSQL `jsonb` itself imposes no such limit, and psqlpy already supports int64 for `bigint` columns via `BigInt(...)`.
## Reproduction

```python
import asyncio

from psqlpy import ConnectionPool
from psqlpy.extra_types import JSONB

DSN = "postgresql://user:pw@host:5432/db"


async def main():
    pool = ConnectionPool(dsn=DSN, max_db_pool_size=2)
    async with pool.acquire() as conn:
        await conn.execute("CREATE TEMP TABLE t (data jsonb)")

        # OK — int32 max
        await conn.execute("INSERT INTO t VALUES ($1)", [{"id": 2147483647}])

        # FAIL — int32 max + 1
        await conn.execute("INSERT INTO t VALUES ($1)", [{"id": 2147483648}])
        # OverflowError: out of range integral type conversion attempted

        # Also fails inside the JSONB() wrapper, before execute even runs:
        JSONB({"id": 2147483648})
        # OverflowError: out of range integral type conversion attempted


asyncio.run(main())
```
Tested with all of the following — same `OverflowError` every time:

| Case | Payload | Result |
|---|---|---|
| `{"id": 2147483647}` (int32 max) | direct | ✅ |
| `{"id": 2147483648}` (int32 max + 1) | direct | ❌ |
| `{"id": 36704707556755}` (real vendor order_id) | direct | ❌ |
| Nested: `{"refs": [{"args": {"order_id": 36704707556755}}]}` | direct | ❌ |
| `JSONB({"id": 2147483648})` | wrapper init | ❌ (raises before execute) |
## Expected

PostgreSQL `jsonb` stores numbers as arbitrary-precision `numeric` values; any int64 (or larger) value should round-trip without error. psqlpy already proves it can marshal int64 correctly when targeting a `bigint` column with `BigInt(...)`, so the same range should be supported inside nested `jsonb` payloads.
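As a sanity check that the limit is specific to psqlpy's marshalling, the stdlib `json` module round-trips the exact nested payload from the table above with no width limit, since Python ints are unbounded:

```python
import json

# Real-world payload that triggers the bug in psqlpy, but is plain JSON:
payload = {"refs": [{"args": {"order_id": 36704707556755}}]}

# Round-trips losslessly through stdlib serialization.
assert json.loads(json.dumps(payload)) == payload
```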
## Actual

`OverflowError` is raised at the Python→Rust boundary, suggesting the `dict` → `jsonb` marshalling path defaults to `i32` instead of `i64` (or arbitrary precision).
## Real-world impact

Hit this in production while inserting LLM / tool-call results into a `jsonb` audit-log table. The tool returned a vendor-issued `order_id = 36704707556755` (well within int64, and normal for snowflake / timestamp-based IDs), and the insert failed inside a `try/except` that only logged `str(e)`. That turned into an undetected data-loss bug for hours, until upstream consumers timed out.
## Workaround

Pass the JSON as text and cast in SQL:

```python
import json

await conn.execute(
    "INSERT INTO t (data) VALUES ($1::text::jsonb)",
    [json.dumps(payload)],  # any int size is now safe
)
```

This bypasses psqlpy's dict marshalling and lets PostgreSQL parse the `jsonb` itself.
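If this pattern is needed at several call sites, it can be factored into a small helper. A minimal sketch (the helper name `jsonb_insert_args` is hypothetical, not part of psqlpy):

```python
import json


def jsonb_insert_args(table: str, payload: dict) -> tuple[str, list]:
    # Build a (query, params) pair that routes the payload through
    # text -> jsonb, so PostgreSQL, not psqlpy, parses the numbers.
    return (
        f"INSERT INTO {table} (data) VALUES ($1::text::jsonb)",
        [json.dumps(payload)],
    )
```

The pair can then be splatted into the driver call, e.g. `await conn.execute(*jsonb_insert_args("t", payload))`.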
## Suggested fix

Treat integers inside `dict` → `jsonb` (and inside `JSONB(...)`) as `i64` by default, or fall back to arbitrary precision (`numeric`) when the value exceeds `i64`. Python `int` is unbounded, so any silent narrowing is surprising.
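For illustration, the suggested widening order sketched in Python (the real fix would live on the Rust side of the marshaller):

```python
# Range bounds for the candidate wire types.
I32_MIN, I32_MAX = -2**31, 2**31 - 1
I64_MIN, I64_MAX = -2**63, 2**63 - 1


def classify_int(v: int) -> str:
    # Sketch of the proposed policy: keep i32 where it fits,
    # widen to i64, and fall back to numeric for anything larger.
    if I32_MIN <= v <= I32_MAX:
        return "i32"
    if I64_MIN <= v <= I64_MAX:
        return "i64"
    return "numeric"
```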
## Versions

- psqlpy 0.11.12
- PostgreSQL 16
- Python 3.13.9