
Reject invalid SQLPutData lengths#182

Merged
davecramer merged 1 commit into postgresql-interfaces:main from jarvis24young:reject-invalid-sqlputdata-length
May 10, 2026

Conversation

@jarvis24young
Contributor

@jarvis24young jarvis24young commented May 6, 2026

Summary

This patch hardens SQLPutData() data-at-execution handling against invalid negative StrLen_or_Ind values.

PGAPI_PutData() previously treated any negative cbValue as a length and stored it in EXEC_used. That value is later used as the append offset for repeated SQLPutData() calls. A caller that supplies an invalid negative length can therefore leave a negative offset in the put-data state, and a subsequent positive chunk can reach the append path with that negative offset.

The updated fix keeps the error handling within the existing statement error-code set, per review feedback:

  • accepts the ODBC sentinel values SQL_NULL_DATA and SQL_DEFAULT_PARAM
  • keeps existing SQL_NTS handling for character data
  • rejects other negative lengths using the existing STMT_INVALID_ARGUMENT_NO statement error
  • treats SQL_DEFAULT_PARAM like SQL_NULL_DATA on the first put-data call so it does not fall through to allocation/copy logic with a negative length
  • rejects later chunks after a null/default indicator before the LO/non-LO split using the existing STMT_SEQUENCE_ERROR

This keeps invalid API input from becoming persistent statement state and prevents it from being reused as a buffer offset, without adding new statement error enums or changing the connection/environment diagnostic paths.

Tests

Added regression coverage to dataatexecution-test for two public ODBC API paths:

SQLPutData(hstmt, "bad", -10);

verifies SQL_ERROR with the existing HY024 diagnostic, and:

SQLPutData(hstmt, NULL, SQL_NULL_DATA);
SQLPutData(hstmt, "x", 1);

verifies SQL_ERROR with the existing HY010 diagnostic.

Verified in WSL with a clean build copy:

cd ~/psqlodbc-pr182-review
./bootstrap
./configure --with-unixodbc=__without_odbc_config \
  CFLAGS='-O1 -g -fno-omit-frame-pointer -I/usr/include/postgresql -DSQLCOLATTRIBUTE_SQLLEN -Wall'
make -j2
cd test
make reset-db LIBODBC=-lodbc
make exe/dataatexecution-test LIBODBC=-lodbc
ODBCSYSINI=. ODBCINSTINI=./odbcinst.ini ODBCINI=./odbc.ini ./reset-db < sampletables.sql
ODBCSYSINI=. ODBCINSTINI=./odbcinst.ini ODBCINI=./odbc.ini ./exe/dataatexecution-test

Observed output:

connected
Result set:
2
3
Parameter	Status
Fetching result sets for array bound (2 results expected)
1: Result set:
4
2: Result set:
5
Invalid SQLPutData length rejected
SQLPutData append after null rejected
disconnecting

@davecramer
Contributor

This PR adds input validation to PGAPI_PutData() to reject invalid negative StrLen_or_Ind values that could corrupt internal statement state. It's a defensive hardening fix.

What it does

  1. execute.c — Two validation checks added:
    - On first SQLPutData call: if cbValue is negative but not one of the ODBC sentinels (SQL_NULL_DATA, SQL_DEFAULT_PARAM), return HY090 error instead of blindly storing it as a length.
    - On subsequent calls (append path): if EXEC_used is already negative (null/default indicator), reject the append with HY020 instead of using a negative offset for memcpy.

  2. statement.h / statement.c — Adds two new error codes (STMT_INVALID_STRING_OR_BUFFER_LENGTH → HY090, STMT_ATTEMPT_TO_CONCATENATE_NULL_VALUE → HY020) with correct ODBC 2.x/3.x SQLSTATE mappings.

  3. test — Adds a regression test that calls SQLPutData(hstmt, "bad", -10) and verifies SQL_ERROR with SQLSTATE HY090.

Positives

  • Correct ODBC semantics: HY090 (invalid string/buffer length) and HY020 (attempt to concatenate null) are the right SQLSTATEs per the ODBC spec.
  • Minimal and focused: Only touches the paths that need validation.
  • Good test coverage: The test exercises the exact failure path and validates the diagnostic record.
  • Prevents real corruption: Without this fix, a negative cbValue stored in EXEC_used becomes a negative old_pos passed to memcpy(&buffer[old_pos], ...) — a potential out-of-bounds write.

Issues / Suggestions

  1. Enum ordering matters: The new enum values STMT_INVALID_STRING_OR_BUFFER_LENGTH and STMT_ATTEMPT_TO_CONCATENATE_NULL_VALUE are inserted between STMT_INVALID_ARGUMENT_NO and STMT_ROW_OUT_OF_RANGE. Since the enum is used as an index into the Statement_sqlstate[] array in statement.c, the array entry order must match exactly. The PR does this correctly — both the enum and the array are updated in the same relative position.
  2. SQL_DEFAULT_PARAM on first call: The code now allows SQL_DEFAULT_PARAM through to putlen = cbValue, and then *current_pdata->EXEC_used = putlen stores it. But the existing early-exit for SQL_NULL_DATA (if (cbValue == SQL_NULL_DATA) { retval = SQL_SUCCESS; goto cleanup; }) doesn't have a corresponding early-exit for SQL_DEFAULT_PARAM. This means SQL_DEFAULT_PARAM (-2) will fall through to the malloc(putlen + 1) path with a negative size. This is a pre-existing bug, not introduced by this PR, but worth noting — the PR could add a similar early-exit for SQL_DEFAULT_PARAM or document why it's left as-is.
  3. Second-call check placement: The new guard if (*current_pdata->EXEC_used < 0) is placed before old_pos = *current_pdata->EXEC_used in the non-LO else branch. However, the LO (large object) branch above it does *current_pdata->EXEC_used += putlen without the same check. If someone calls SQLPutData with SQL_NULL_DATA on a large-object parameter and then calls again, the LO path would still proceed. This is likely fine in practice (the first call would have failed to open the LO fd), but for consistency the check could be placed before the LO/non-LO branch split.
  4. Test doesn't exercise the HY020 path: The test only covers the HY090 (invalid length) path. A second test case that first sends SQL_NULL_DATA and then attempts a second SQLPutData with valid data would exercise the HY020 path. Not a blocker, but would improve coverage.
  5. Minor style: The error message strings are clear and descriptive. The func variable is already in scope for the error context. No issues here.

@jarvis24young jarvis24young force-pushed the reject-invalid-sqlputdata-length branch from d803993 to dc86a51 Compare May 8, 2026 10:22
@jarvis24young
Contributor Author

Updated the patch based on the review feedback.

Changes made:

  • Removed the newly added statement error enums and Statement_sqlstate[] entries.
  • Reused existing statement diagnostics instead:
    • invalid negative SQLPutData() length now uses STMT_INVALID_ARGUMENT_NO (HY024)
    • appending after a NULL/default put-data indicator now uses STMT_SEQUENCE_ERROR (HY010)
  • Added first-call handling for SQL_DEFAULT_PARAM so it follows the same early-exit path as SQL_NULL_DATA and does not fall through to allocation/copy logic with a negative length.
  • Moved the negative EXEC_used guard before the large-object / non-large-object branch split, so both paths are protected consistently.
  • Added a second regression case for the append-after-SQL_NULL_DATA path.

I also updated the PR description to reflect the reused diagnostics and re-ran the WSL regression test:

ODBCSYSINI=. ODBCINSTINI=./odbcinst.ini ODBCINI=./odbc.ini ./exe/dataatexecution-test

connected
Result set:
2
3
Parameter	Status
Fetching result sets for array bound (2 results expected)
1: Result set:
4
2: Result set:
5
Invalid SQLPutData length rejected
SQLPutData append after null rejected
disconnecting

@davecramer davecramer merged commit 8f87808 into postgresql-interfaces:main May 10, 2026
6 checks passed