Summary
xrspatial.mcda.combine.owa() builds a temporary stack of every weighted criterion via xr.concat along a new __mcda_criterion axis, then sorts that stack descending along axis 0. There is no size check, so a caller passing many criteria over a moderate raster will exhaust host memory before any error is raised.
Where
xrspatial/mcda/combine.py, function owa():
weighted_layers = []
for var_name in criteria.data_vars:
    w = criterion_weights[var_name]
    weighted_layers.append(criteria[var_name] * w * n)

# Stack, sort descending along criterion axis, apply order weights
stacked = xr.concat(weighted_layers, dim="__mcda_criterion")
sorted_data = _sort_descending(stacked.data, axis=0)
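The unguarded core is easy to reproduce in isolation. A minimal sketch (sizes kept small so it runs; the dimension and variable names here are illustrative, not taken from the library):

import numpy as np
import xarray as xr

# Mirror the stacking step: n float64 layers concatenated along a new axis.
# At n=100, h=w=10_000 (the worked example below), this concat alone
# allocates ~80 GB with no size check beforehand.
n, h, w = 8, 512, 512
layers = [xr.DataArray(np.random.rand(h, w), dims=("y", "x")) for _ in range(n)]
stacked = xr.concat(layers, dim="__mcda_criterion")  # n * h * w * 8 bytes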
Memory cost
The stack holds every criterion as float64:
- bytes per pixel = n_criteria * 8
- total bytes = n_criteria * H * W * 8
_sort_descending then runs -np.sort(-data, axis=0), which roughly doubles the working set during the sort.
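For reference, a reconstruction of the helper consistent with the call quoted above (the actual body may differ):

import numpy as np

def _sort_descending(data, axis=0):
    # `-data` materializes one full negated copy, and np.sort returns a
    # second full copy, so peak usage is roughly twice the stack itself.
    return -np.sort(-data, axis=axis)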
Worked example
100 criteria over a 10000 x 10000 raster:
100 * 10000 * 10000 * 8 = 80 GB
The sort step roughly doubles that to ~160 GB peak. A 32 GB laptop will swap or OOM.
1000 criteria over a 4000 x 4000 raster:
1000 * 4000 * 4000 * 8 = 128 GB
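Both figures are straightforward to verify (decimal GB, matching the numbers above):

def owa_stack_bytes(n_criteria, h, w, itemsize=8):
    # Size of the float64 criterion stack built by xr.concat, in bytes.
    return n_criteria * h * w * itemsize

print(owa_stack_bytes(100, 10_000, 10_000) / 1e9)  # 80.0  -> ~160 GB peak in the sort
print(owa_stack_bytes(1_000, 4_000, 4_000) / 1e9)  # 128.0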
Proposed fix
Add a working-memory guard at the top of owa() that estimates n_criteria * H * W * 8 and raises MemoryError if it exceeds 50% of available host RAM. The dask path is bounded per chunk, so skip the guard when the input data is dask-backed.
Same shape of guard as #1319 / #1361 / #1367 / #1369.
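A minimal sketch of such a guard, assuming psutil for the RAM query; the helper name, the max_fraction parameter, and the duck-typed dask check are illustrative, not existing xrspatial code:

import psutil  # assumed available for querying host memory

def _guard_owa_memory(criteria, max_fraction=0.5):
    # Estimate the float64 stack that owa() is about to build and fail
    # fast if it would exceed max_fraction of available host RAM.
    first = criteria[next(iter(criteria.data_vars))]
    if hasattr(first.data, "__dask_graph__"):
        # Dask-backed inputs are processed per chunk, so skip the guard.
        return
    n_criteria = len(criteria.data_vars)
    h, w = first.shape[-2:]
    estimated = n_criteria * h * w * 8  # bytes for the concat stack
    available = psutil.virtual_memory().available
    if estimated > max_fraction * available:
        raise MemoryError(
            f"owa() would stack ~{estimated / 2**30:.1f} GiB "
            f"({n_criteria} criteria x {h} x {w} float64), more than "
            f"{max_fraction:.0%} of available RAM "
            f"({available / 2**30:.1f} GiB); consider dask-backed inputs."
        )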
Audit reference
Recorded in the mcda security audit alongside #1311 (the NaN/Inf weight fix series), as "MEDIUM Cat 1: combine.owa stacks all criteria via xr.concat without size guard".