
Fix silent / misleading failures in waterdata pagination #273

Draft
thodson-usgs wants to merge 6 commits into
DOI-USGS:main from thodson-usgs:fix-paginated-truncation-errors

Conversation

@thodson-usgs

Summary

Found while reviewing the repo for high-value bugfixes. Two latent bugs in _walk_pages and the parallel pagination loop in get_stats_data (dataretrieval/waterdata/utils.py) cause silent or misleading failures whenever a paginated waterdata request is interrupted mid-walk.

Bug 1 — resp is stale when client.request() itself raises

while curr_url:
    try:
        resp = client.request(...)              # ← may raise ConnectionError, Timeout, …
        dfs.append(_get_resp_data(resp, geopd))
        curr_url = _next_req_url(resp)
    except Exception:
        error_text = _error_body(resp)          # ← uses *previous* page's resp
        logger.error("Request incomplete. %s", error_text)
        ...

When the network call raises before resp is reassigned, resp still references the previous successful page. The log message therefore describes the wrong response, and _error_body() calls resp.json() on a 200 OK body that was never an error envelope, which can itself raise and bury the original failure.

Bug 2 — no status-code check on mid-pagination responses

The initial request gate at line 615 reads if resp.status_code != 200: raise RuntimeError(_error_body(resp)). The pagination loop does not. So a 5xx response with a JSON error body that lacks numberReturned flows into _get_resp_data(), which silently returns an empty DataFrame; _next_req_url() finds no next link; and the loop exits with no error logged anywhere. The caller gets truncated data and never knows.
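This silent-truncation path can be demonstrated with a few lines. The helpers below (parse_page, next_url) are simplified stand-ins for _get_resp_data and _next_req_url, not the library's actual code; the point is that a 5xx JSON error body lacking "numberReturned" parses as a zero-row page with no next link, so the walk ends with no error at all:

```python
# Hypothetical sketch of the failure mode. parse_page / next_url are
# simplified stand-ins for the library's internal helpers.
import pandas as pd

def parse_page(body):
    # Mirrors the silent path: no "features" key -> empty frame, no error.
    return pd.DataFrame(body.get("features", []))

def next_url(body):
    # Only a rel="next" link continues the walk.
    for link in body.get("links", []):
        if link.get("rel") == "next":
            return link.get("href")
    return None

# A 503 error envelope: no features, no next link.
error_body = {"code": 503, "description": "Service Unavailable"}
df = parse_page(error_body)
print(df.empty, next_url(error_body))  # empty frame, no next link: loop exits quietly
```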

This affects every paginated waterdata getter: get_daily, get_continuous, get_monitoring_locations, get_time_series_metadata, get_combined_metadata, get_field_measurements, get_field_measurements_metadata, get_peaks, get_channel, get_reference_table, plus the stats getters (get_stats_por, get_stats_date_range).

Fix

Two-line change to each loop:

if resp.status_code != 200:
    raise RuntimeError(_error_body(resp))

…immediately after client.request(...) (mirrors the initial-request gate). And:

except Exception as e:
    logger.error("Request incomplete: %s", e)

…instead of _error_body(resp), so the actual exception text surfaces. This works for both the network-error case (the exception itself) and the 5xx case (the RuntimeError(_error_body(resp)) just raised on a fresh resp).

The "best-effort" behavior is preserved: on failure the loop still logs and returns whatever pages were collected. The change is purely to make the failure observable and the log message accurate.
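Putting the two changes together, the fixed loop has roughly this shape. This is a minimal sketch with assumed signatures for client.request, the parse/next/error helpers, and the logger; the real code lives in dataretrieval/waterdata/utils.py:

```python
# Sketch of the fixed pagination loop; helper names are stand-ins.
import logging

logger = logging.getLogger(__name__)

def walk_pages(client, first_url, parse_page, next_url, error_body):
    dfs = []
    curr_url = first_url
    while curr_url:
        try:
            resp = client.request("GET", curr_url)
            if resp.status_code != 200:       # new: gate every page, not just the first
                raise RuntimeError(error_body(resp))
            dfs.append(parse_page(resp))
            curr_url = next_url(resp)
        except Exception as e:                # new: log the live exception, not stale resp
            logger.error("Request incomplete: %s", e)
            break                             # best effort: keep the pages collected so far
    return dfs
```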

Test plan

  • test_walk_pages_logs_actual_exception_when_request_raises — simulates a ConnectionError mid-pagination, asserts the logged error reflects the actual exception ("boom") rather than something derived from a stale prior-page response.
  • test_walk_pages_surfaces_5xx_mid_pagination — page 2 returns HTTP 503 with a JSON body, asserts the 503 / ServiceUnavailable shows up in the error log. Previously this was silently swallowed.
  • Existing test_walk_pages_multiple_mocked (happy path) still passes.
  • Full mocked test suite: 278 passed, ruff + format clean.
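The mid-pagination-exception test can be sketched with pytest's caplog fixture. The walk function and logger name below are assumptions standing in for the dataretrieval.waterdata.utils internals; the real tests mock the HTTP client rather than taking a callable:

```python
# Sketch of the regression test shape; walk() stands in for _walk_pages.
import logging

logger = logging.getLogger("waterdata.utils")

def walk(client):
    dfs = []
    url = "page1"
    while url:
        try:
            resp = client(url)
            dfs.append(resp["data"])
            url = resp.get("next")
        except Exception as e:
            logger.error("Request incomplete: %s", e)
            break
    return dfs

def test_walk_logs_actual_exception(caplog):
    def failing_client(url):
        if url == "page2":
            raise ConnectionError("boom")
        return {"data": 1, "next": "page2"}
    with caplog.at_level(logging.ERROR, logger="waterdata.utils"):
        assert walk(failing_client) == [1]  # best effort: page 1 is kept
    # the logged message reflects the live exception, not a stale response
    assert any("boom" in rec.getMessage() for rec in caplog.records)
```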

🤖 Generated with Claude Code

thodson-usgs and others added 2 commits May 8, 2026 16:44
Two bugs in _walk_pages (and the parallel get_stats_data pagination
loop) caused silent or misleading failures when a paginated request was
interrupted mid-walk:

1. The except handler called _error_body(resp) — but when
   client.request() itself raised (ConnectionError, Timeout, etc.),
   `resp` still pointed at the *previous successful page*. The
   "incomplete" log message therefore described the wrong request, and
   on a non-JSON 200 body would itself raise inside resp.json() and
   bury the original failure.

2. No status-code check was performed on paginated responses. The
   initial request guards `if resp.status_code != 200`, but the loop
   doesn't. A 5xx body that didn't include "numberReturned" was
   silently turned into an empty DataFrame by _get_resp_data, the
   `next` link wasn't found, and the loop quietly exited — handing the
   user truncated data with no error logged anywhere. This affects
   every paginated waterdata getter (get_daily, get_continuous,
   get_monitoring_locations, …).

Fix:
- Guard the loop body with `if resp.status_code != 200: raise
  RuntimeError(_error_body(resp))`, mirroring the initial request.
- Capture the exception with `as e` and log `e` instead of touching the
  stale `resp`. Same change in get_stats_data.

Behavior preserved: on failure the loop still logs and returns
whatever pages were collected ("best effort"). The change is purely to
make the failure observable and the log message accurate.

Two new tests cover (a) network-exception mid-pagination and (b) 5xx
mid-pagination, asserting that the actual failure surfaces in the
error log.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The status-code guard appeared four times in waterdata/utils.py
(initial-request and pagination paths in both _walk_pages and
get_stats_data) with bit-identical bodies. Extract it into a single
named helper; the helper's docstring now carries the "silent-empty-frame"
rationale that the inline comment was apologizing for.

Also:
- trim the second multi-line comment in _walk_pages to a single line
  noting the actual constraint (`resp` may be stale)
- hoist `import logging` to module level in waterdata_utils_test.py
- condense the two new test docstrings to one-line contract statements;
  the bug history lives in the commit message

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Follow-up cleanups on top of the silent-truncation fix:

- New tests for the get_stats_data parallel pagination loop, mirroring
  the _walk_pages ConnectionError + 5xx pair (closes coverage gap from
  the prior commits).
- Lift the hard-coded logger-name string to a module-level _LOGGER_NAME
  constant derived from the utils module's __name__ (self-syncing).
- logger.error → logger.exception in both pagination except clauses so
  the traceback is automatically attached to the log record.
- _raise_for_non_200 docstring: drop the historical bug narration, add
  a one-line note distinguishing it from Response.raise_for_status
  (we route through _error_body for USGS-API-aware messages).
- Drop the inconsistent "resp may be stale" comment; the `as e` and
  variable naming already convey intent, and get_stats_data already
  sidesteps it by logging `url` instead of `resp.url`.
- _run_get_stats_data_with_failure: make monkeypatch a required arg
  (both callers pass it), and swap the lambda for MagicMock so future
  signature drift in _handle_stats_nesting isn't silently masked.
- Tighten test docstrings that narrated the PR/bug.
- Extract _error_log_messages(caplog) helper (used 4 times).
- Lift the conditional links ternary out of the dict literal in _resp_ok.
- NEWS.md: bump date to today.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Copilot AI left a comment


Pull request overview

This PR fixes two failure modes in waterdata pagination where mid-walk interruptions could either (a) log misleading/stale error details when requests.Session.request() raises, or (b) silently truncate results when a later page returns a non-200 response.

Changes:

  • Added a shared _raise_for_non_200() helper and applied it to both the initial request and subsequent pagination requests in _walk_pages and get_stats_data.
  • Updated pagination exception handling to log the actual exception message rather than deriving an error from a potentially stale prior response.
  • Added regression tests for mid-pagination request exceptions and mid-pagination non-200 responses; documented the fixes in NEWS.md.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.

File Description
dataretrieval/waterdata/utils.py Ensures every paginated page is status-checked and avoids stale-response error logging during pagination failures.
tests/waterdata_utils_test.py Adds targeted regression tests covering mid-pagination request exceptions and 5xx responses.
NEWS.md Documents the pagination bugfixes and their user-visible impact.


Comment thread dataretrieval/waterdata/utils.py
thodson-usgs and others added 2 commits May 14, 2026 21:13
…ion warning

Copilot noted that the get_stats_data pagination-failure warning logs
only the base statistics URL (constant across all pages), dropping the
next_token cursor that uniquely identifies which page in the sequence
failed. The original `resp.url` reference was abandoned because `resp`
may be stale when client.request() itself raises — but next_token is
the loop variable and is always current.

Add `(next_token=<value>)` to the warning, plus a focused test that
asserts the cursor surfaces in the WARNING-level log entry.
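Since next_token is the loop variable, it is always current even when the request itself raised, making it a safe substitute for the abandoned resp.url. A minimal sketch of the resulting logging shape (fetch_page and the surrounding loop are simplified stand-ins for get_stats_data):

```python
# Sketch of the stats-pagination warning carrying the cursor;
# fetch_page is a stand-in for the real client call.
import logging

logger = logging.getLogger(__name__)

def fetch_stats_pages(fetch_page, url):
    results, next_token = [], None
    while True:
        try:
            page, next_token = fetch_page(url, next_token)
            results.append(page)
            if next_token is None:
                return results
        except Exception as e:
            logger.exception("Request incomplete: %s", e)
            # next_token is the loop variable, so it identifies the
            # failing page even though resp may never have been assigned.
            logger.warning(
                "Request failed for URL: %s (next_token=%s). "
                "Data download interrupted.", url, next_token,
            )
            return results
```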

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.

Comment thread tests/waterdata_utils_test.py Outdated
Copilot PR DOI-USGS#273 review: the variable name `page2_500` is misleading
because the mocked response is a 503 (ServiceUnavailable), not a 500.
Renamed for clarity so future readers debugging a failure don't have
to mentally reconcile the name against the actual status code.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.

Comments suppressed due to low confidence (1)

dataretrieval/waterdata/utils.py:1152

  • Same as in _walk_pages: the PR description mentions switching to logger.error(...), but this code uses logger.exception(...) and will log a full traceback for mid-pagination failures. Please confirm that this additional verbosity is intended; otherwise align the logging call with the described behavior.
            except Exception as e:  # noqa: BLE001
                logger.exception("Request incomplete: %s", e)
                logger.warning(
                    "Request failed for URL: %s (next_token=%s). "

Comment on lines +110 to +111
"""Pull ERROR-and-above message strings out of caplog. The four
pagination-failure tests below all assert against the same shape."""
Comment on lines +685 to 688
except Exception as e:  # noqa: BLE001
    logger.exception("Request incomplete: %s", e)
    logger.warning(
        "Request failed for URL: %s. Data download interrupted.", curr_url
