Releases: meldig/pdal-parallelizer
V 2.1.0
- Changed the code structure of the project and the point cloud processing approach. A point cloud is now represented as a structured NumPy array with one field per point dimension: X coordinate, Y coordinate, Z coordinate, Red, Green, Blue, and so on. The PDAL pipeline written by the user is decomposed into stages, and each stage is executed on a NumPy array representing a cloud (or a tile, in the case of single-cloud processing). A sketch of this stage-on-array execution follows the list below.
- Each point cloud is now passed as a NumPy array instead of an entire file.
- The unmanaged memory generated by a task is now released when the task finishes. This new approach lets the user process as many tasks as they want, regardless of the number of workers.
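A hedged sketch of the stage-on-array idea, using the pdal Python bindings. The stage, field names, and values below are illustrative only; pdal-parallelizer's actual code decomposes the user's pipeline and schedules each stage as a task.

```python
import json
import numpy as np
import pdal

# A tiny point cloud as a structured array: one field per point dimension.
points = np.array(
    [(0.0, 0.0, 1.2, 255, 0, 0), (1.0, 0.5, 1.4, 0, 255, 0)],
    dtype=[("X", "f8"), ("Y", "f8"), ("Z", "f8"),
           ("Red", "u2"), ("Green", "u2"), ("Blue", "u2")],
)

# A single PDAL stage executed on the in-memory array instead of a file.
stage = json.dumps({"pipeline": [{"type": "filters.range", "limits": "Z[1.3:10]"}]})
pipeline = pdal.Pipeline(stage, arrays=[points])
pipeline.execute()

# The output is again a structured array, ready to feed the next stage.
filtered = pipeline.arrays[0]
print(filtered.dtype.names, len(filtered))  # keeps only the point with Z = 1.4
```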
V 2.0.3
- New merge_tiles option to merge all your tiles at the end of an execution. It takes into account the compression, minor_version and dataformat_id specified in your tile pipelines.
- New remove_tiles option to delete all your tiles after they have been merged. A sketch of both options follows this list.
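A hedged sketch of using both options through the Python API introduced in 2.0.0. Only the process_pipelines name appears in these notes; the module name and the other parameter names (config, input_type, n_workers) are assumptions, so check the project README for the actual signature.

```python
from pdal_parallelizer import process_pipelines  # module name assumed

process_pipelines(
    config="./config.json",  # assumed: paths and pipeline for the run
    input_type="single",     # assumed: one large cloud split into tiles
    n_workers=4,             # assumed: number of Dask workers
    merge_tiles=True,        # merge all tiles into one file at the end
    remove_tiles=True,       # then delete the individual tiles
)
```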
V 2.0.2
- Create temp and output folders if they don't exist
- Add a merge_tiles option to merge all the resulting chunks into one file
- Trigger warnings:
  - if the tile_size value has not been changed from its default
  - if the number of workers exceeds the number of available CPUs
- If the tile_size value exceeds the dimensions of the input cloud, it is now adapted to the actual boundaries of the cloud (see the sketch after this list).
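A minimal sketch (a hypothetical helper, not the project's actual code) of the clamping behavior described in the last bullet: a requested tile size larger than the cloud is reduced to the cloud's real extent.

```python
def clamp_tile_size(tile_size, bounds):
    """tile_size: (dx, dy); bounds: (minx, miny, maxx, maxy) of the cloud."""
    dx, dy = tile_size
    minx, miny, maxx, maxy = bounds
    return (min(dx, maxx - minx), min(dy, maxy - miny))

# A 1000 x 1000 tile over a 256 x 256 cloud becomes a single 256 x 256 tile.
print(clamp_tile_size((1000.0, 1000.0), (0.0, 0.0, 256.0, 256.0)))  # (256.0, 256.0)
```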
V 2.0.1
Fixed an import bug. The API is now functional.
V 2.0.0
New API for pdal-parallelizer:
- The process_pipelines function can now be called directly from Python (a minimal sketch follows).
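A minimal, hedged sketch of the call; the module name pdal_parallelizer and every parameter except the process_pipelines function itself are assumptions based on options mentioned elsewhere in this changelog.

```python
from pdal_parallelizer import process_pipelines  # module name assumed

# Assumed parameters: a JSON config describing input/output paths and the
# pipeline to run, plus the number of Dask workers to spawn.
process_pipelines(config="./config.json", input_type="dir", n_workers=4)
```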
V 1.10.19
- Fix memory leaks
V 1.10.18
- Fix a small bug in file_manager
V 1.10.17
- Fix small bugs in the split function
V 1.10.16
- Dependencies: dask[distributed] -> distributed
V 1.10.15
- Trigger the garbage collector at the end of the execution
- Add the ClassFlags dimension if it is missing
- Buffer points now carry the withheld flag (a sketch follows this list)
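A hedged sketch (illustrative, not the project's code) of the last two bullets: add a ClassFlags field when the array lacks it, then set the withheld bit (value 4 in the LAS classification flags) on the buffer points.

```python
import numpy as np
import numpy.lib.recfunctions as rfn

WITHHELD = 0b100  # withheld bit of the LAS classification flags

def mark_buffer_withheld(points, in_buffer):
    """points: structured array; in_buffer: boolean mask of buffer points."""
    # Add the ClassFlags dimension if the array does not have it yet.
    if "ClassFlags" not in (points.dtype.names or ()):
        flags = np.zeros(len(points), dtype="u1")
        points = rfn.append_fields(points, "ClassFlags", flags, usemask=False)
    # Flag every buffer point as withheld.
    points["ClassFlags"][in_buffer] |= WITHHELD
    return points
```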