Fix perlin/terrain dask backends: enable parallelism and out-of-core support #870
Merged
brendancol merged 3 commits into master on Feb 24, 2026
Conversation
Pass chunks= to da.linspace so coordinate arrays match the input data's chunk structure. Without this, da.linspace created single-chunk arrays, making da.map_blocks process everything as one block. Fixes #869
Same issue as perlin(): da.linspace was called without chunks=, producing single-chunk coordinate arrays and negating any parallelism from da.map_blocks. Refs #869
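A minimal sketch of the pattern both commits apply, assuming a 2-D dask array; the shape and chunk sizes here are illustrative, not values from the PR:

```python
import dask.array as da

# Illustrative input: a chunked 2-D dask array (shape/chunks are made up).
data = da.zeros((4096, 4096), chunks=(1024, 1024))
height, width = data.shape

# Before the fix: no chunks=, so the coordinate array is a single block
# and map_blocks collapses everything into one task.
# xs = da.linspace(0, 1, width)

# After the fix: chunk the coordinate arrays to match the data's own
# chunk structure, so each data block pairs with matching coordinate blocks.
xs = da.linspace(0, 1, width, chunks=data.chunks[1])
ys = da.linspace(0, 1, height, chunks=data.chunks[0])

assert xs.chunks[0] == data.chunks[1]  # coordinate blocks line up with data columns
assert ys.chunks[0] == data.chunks[0]  # and with data rows
```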
The normalization `(data - da.min(data)) / da.ptp(data)` creates a diamond dependency in the task graph: every source block feeds both the global reduction and the final elementwise op. The scheduler cannot release any block until both paths complete, so all blocks must be in memory simultaneously, which OOMs on larger-than-memory inputs. Fix by computing reductions in a separate pass via dask.compute(), producing concrete scalars before building the elementwise graph. Each block can then be processed and released independently.

Also in terrain.py:
- Replace np.min/np.ptp with da.min/da.ptp (explicit dask ops instead of relying on __array_function__ dispatch)
- Replace data[data < 0.3] = 0 with da.where (dask-native)

Refs #869
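A hedged sketch of the two-pass normalization; the input array is illustrative, and the 0.3 threshold is the one quoted from terrain.py above:

```python
import dask
import dask.array as da

data = da.random.random((8192, 8192), chunks=(1024, 1024))  # illustrative input

# Diamond-dependency version: every source block feeds both the global
# reductions and the final elementwise op, so nothing can be released early.
#   normalized = (data - da.min(data)) / da.ptp(data)

# Two-pass version: materialize the reductions as concrete scalars first,
# then build a purely elementwise graph over those scalars. Blocks are now
# processed and released one at a time.
dmin, dptp = dask.compute(da.min(data), da.ptp(data))
normalized = (data - dmin) / dptp

# dask-native masking with da.where instead of setitem.
normalized = da.where(normalized < 0.3, 0, normalized)
```

Passing both reductions to a single dask.compute call lets dask merge the two graphs and share common tasks, so the extra pass reads the data once rather than twice.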
Summary
- da.linspace without chunks= in both perlin.py and terrain.py produced single-chunk coordinate arrays, so da.map_blocks processed everything as one block: no actual parallelism
- The normalization (data - da.min(data)) / da.ptp(data) forced the scheduler to hold all source blocks in memory simultaneously, making the dask path OOM on larger-than-memory inputs
- terrain.py used np.min/np.ptp instead of da.min/da.ptp, relying on fragile __array_function__ dispatch, and used setitem (data[data < 0.3] = 0), which isn't dask-native

Fixes
- Pass chunks= to da.linspace matching the input data's chunk structure so da.map_blocks distributes work across blocks
- Compute reductions in a separate pass via dask.compute(da.min(data), da.ptp(data)), then normalize with concrete scalars; this breaks the diamond dependency so blocks can be processed and released independently
- Replace np.min/np.ptp with explicit da.min/da.ptp, and data[data < 0.3] = 0 with da.where

Fixes #869
Test plan
- test_perlin_cpu, test_perlin_dask_cpu, test_perlin_gpu pass
- test_terrain_cpu, test_terrain_dask_cpu, test_terrain_gpu pass
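For reference, pytest's keyword filter can run just these in one pass (assuming standard test discovery): `pytest -k "perlin or terrain"`.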