`stream_order_dinf` in `xrspatial/hydro/stream_order_dinf.py` allocates several full-grid working arrays on its eager numpy and cupy backends with no upfront budget check. A 50000x50000 raster asks for about 100 GB of host RAM before anything errors out.
Same pattern as #1318/#1319 for `flow_accumulation`, #1328/#1331 for `stream_order_d8`, and #1343 for `stream_link_dinf`. Hydro is safety-critical, so the same asymmetric guard applies: eager backends check, dask backends skip since per-tile allocations are already bounded by chunk size.
D-inf encodes one continuous downstream angle per cell (with two bracketing neighbors and proportional fractions), so there is no `(8, H, W)` per-neighbor weight buffer like in the MFD variant. The working set matches the d8 budget.
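For intuition, the single-angle encoding can be sketched like this (illustrative Python only; `dinf_split` is a hypothetical helper, not the actual kernel in `stream_order_dinf.py`):

```python
import numpy as np

def dinf_split(angle):
    """Map one D-inf flow angle (radians, counterclockwise from east)
    to its two bracketing D8 neighbors and proportional weights.
    Hypothetical helper for illustration -- not the xrspatial kernel."""
    step = np.pi / 4                  # D8 neighbors sit at multiples of 45 degrees
    lower = int(angle // step) % 8    # index of the lower bracketing neighbor
    frac = (angle % step) / step      # share of flow sent to the upper neighbor
    return lower, (1.0 - frac, frac)

# One scalar angle per cell is enough: a 30-degree angle splits flow
# between neighbor 0 (east) and neighbor 1 (northeast)
lower, (w_lo, w_hi) = dinf_split(np.deg2rad(30.0))
```

Because each cell stores just this one angle, per-cell state stays O(1), unlike MFD's eight explicit neighbor weights.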
## Allocations
`_strahler_dinf_cpu` (the worst case of the two CPU kernels) allocates:

| Array     | dtype   | bytes/px |
|-----------|---------|----------|
| order     | float64 | 8        |
| in_degree | int32   | 4        |
| max_in    | float64 | 8        |
| cnt_max   | int32   | 4        |
| queue_r   | int64   | 8        |
| queue_c   | int64   | 8        |
| **Total** |         | **40**   |
The dispatch wrapper also casts `fd` to float64 and builds an int8 `stream_mask` before calling the kernel. Conservative budget: 40 B/px CPU.
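The 40 B/px figure checks out against NumPy's dtype item sizes (quick sanity computation; array names taken from the table above):

```python
import numpy as np

# Per-pixel byte cost of the CPU kernel's full-grid working arrays
cpu_arrays = {
    "order": np.float64,
    "in_degree": np.int32,
    "max_in": np.float64,
    "cnt_max": np.int32,
    "queue_r": np.int64,
    "queue_c": np.int64,
}
cpu_bytes_per_px = sum(np.dtype(d).itemsize for d in cpu_arrays.values())
print(cpu_bytes_per_px)  # 40
```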
`_stream_order_dinf_cupy` allocates on the device:

| Array          | dtype   | bytes/px |
|----------------|---------|----------|
| angles_f64     | float64 | 8        |
| stream_mask_i8 | int8    | 1        |
| in_degree      | int32   | 4        |
| state          | int32   | 4        |
| order          | float64 | 8        |
| max_in         | float64 | 8        |
| cnt_max        | int32   | 4        |
| **Total**      |         | **37**   |
Plus the input `fa_cp` cast (8 B/px) on the device. Conservative budget: 40 B/px GPU.
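The device-side total follows the same arithmetic (sanity computation with NumPy dtype sizes; the `fa_cp` cast adds its 8 B/px on top, as noted above):

```python
import numpy as np

# Per-pixel byte cost of the cupy kernel's device-side working arrays
gpu_arrays = {
    "angles_f64": np.float64,
    "stream_mask_i8": np.int8,
    "in_degree": np.int32,
    "state": np.int32,
    "order": np.float64,
    "max_in": np.float64,
    "cnt_max": np.int32,
}
gpu_bytes_per_px = sum(np.dtype(d).itemsize for d in gpu_arrays.values())
print(gpu_bytes_per_px)  # 37
```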
## Worked example
50000x50000 = 2.5e9 pixels. CPU peak working set: 2.5e9 px x 40 B/px = 100 GB, allocated before any sanity check runs.
## Fix
Mirror PR #1331 / #1347: per-module `_check_memory` and `_check_gpu_memory` helpers, wired into the numpy and cupy dispatch in `stream_order_dinf()`. Leave dask alone. Add tests for oversize rejection, valid pass-through, dask bypass, dimensions in error message, and cupy oversize gating.
One fix per PR per the security-sweep policy. `stream_order_mfd` (#1349) gets its own.