Merge remote-tracking branch 'origin/master' into issue-2024
hobu committed Aug 15, 2018
2 parents 97dcfc2 + 3931126 commit ec79094
Showing 62 changed files with 2,013 additions and 445 deletions.
1 change: 1 addition & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -40,6 +40,7 @@ Debug/
Release/
RelWithDebInfo/
ipch/
CMakeSettings.json
*.sln
*.vcxproj
*.vcxproj.filters
2 changes: 1 addition & 1 deletion README.md
@@ -6,4 +6,4 @@ PDAL
[![Build Status](https://travis-ci.org/PDAL/PDAL.png?branch=master)](https://travis-ci.org/PDAL/PDAL)
[![AppVeyor Build Status](https://ci.appveyor.com/api/projects/status/6dehrm0v22cw58d3/branch/master?svg=true)](https://ci.appveyor.com/project/hobu/pdal)

See http://pdal.io/ for more info
See https://pdal.io/ for more info
5 changes: 5 additions & 0 deletions cmake/options.cmake
@@ -101,6 +101,11 @@ option(BUILD_PLUGIN_MBIO
add_feature_info("MBIO plugin" BUILD_PLUGIN_MBIO
"add features that depend on MBIO")

option(BUILD_PLUGIN_FBX
"Choose if FBX support should be built" FALSE)
add_feature_info("FBX plugin" BUILD_PLUGIN_FBX
"add features that depend on FBX")

option(BUILD_TOOLS_NITFWRAP "Choose if nitfwrap tool should be built" FALSE)

option(WITH_TESTS
8 changes: 6 additions & 2 deletions cmake/unix_compiler_options.cmake
@@ -1,6 +1,10 @@
function(pdal_target_compile_settings target)
set_property(TARGET ${target} PROPERTY CXX_STANDARD 11)
set_property(TARGET ${target} PROPERTY CXX_STANDARD_REQUIRED TRUE)
if (NOT ${CMAKE_VERSION} VERSION_LESS 3.1)
set_property(TARGET ${target} PROPERTY CXX_STANDARD 11)
set_property(TARGET ${target} PROPERTY CXX_STANDARD_REQUIRED TRUE)
else()
set(PDAL_CXX_STANDARD "-std=c++11")
endif()
if (${CMAKE_CXX_COMPILER_ID} MATCHES "GNU")
#
# VERSION_GREATER_EQUAL doesn't come until cmake 3.7
72 changes: 72 additions & 0 deletions doc/apps/tile.rst
@@ -0,0 +1,72 @@
.. _tile_command:

********************************************************************************
tile
********************************************************************************

The ``tile`` command will create multiple output files from input files
by generating square tiles of points. The command takes an input
file name and an output filename template.

This command is similar to the :ref:`split <split_command>` command, but
differs in several ways. The ``tile`` command:

- Uses streaming mode to limit the amount of memory consumed by point data.
- Uses a placeholder for filename output.
- Provides for reprojection of data to create consistent output.
- Always creates square tiles that contain all points "covered" by each tile.

::

$ pdal tile <input> <output>

::

--input, -i Input filename
--output, -o Output filename
--length Edge length for cells [Default: 1000]
--origin_x Origin in X axis for cells [Default: None]
--origin_y Origin in Y axis for cells [Default: None]
--buffer Size of buffer (overlap) to include around each tile.
[Default: 0]
--out_srs Spatial reference system to which all input points
will be reprojected. [Default: None]

The input filename can contain a `glob pattern`_ to allow multiple files
as input.

The output filename must contain a placeholder character ``#``. The
placeholder character is replaced with the X/Y index of the tile in a
Cartesian grid. For example, if the output filename is specified as
``out#.las``, the tile containing the origin will be named ``out0_0.las``.
The tile to its right will be named ``out1_0.las``. The tile above it
will be named ``out0_1.las``. The command does not create directories --
create any desired directories before running.

If an origin is not supplied as an argument, the first point read is
used as the origin.
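
The grid naming described above follows a simple index calculation. A
minimal Python sketch (the ``tile_index`` function and coordinate values
are illustrative, not part of PDAL):

```python
import math

def tile_index(x, y, origin_x, origin_y, length=1000.0):
    # Map a point to the (column, row) tile index used in the output
    # filename, e.g. out0_0.las for the tile containing the origin.
    col = math.floor((x - origin_x) / length)
    row = math.floor((y - origin_y) / length)
    return col, row

print(tile_index(0.0, 0.0, 0.0, 0.0))       # (0, 0)  -> out0_0.las
print(tile_index(1500.0, 250.0, 0.0, 0.0))  # (1, 0)  -> out1_0.las
```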

Example 1:
--------------------------------------------------------------------------------

::

$ pdal tile infile.laz "outfile_#.bpf"

This command takes the points from the input file ``infile.laz`` and creates
output files ``outfile_0_0.bpf``, ``outfile_0_1.bpf``, ... where each output
file contains points in the 1000x1000 square units represented by the tile.
The X/Y location of the first point is used as the origin of the tile grid.

Example 2:
--------------------------------------------------------------------------------

::

$ pdal tile "/home/me/files/*" "out_#.txt" --out_srs="EPSG:4326"

Reads all files in the directory ``/home/me/files`` as input and reprojects
points to geographic coordinates if necessary. The output is written to
a set of text files in the current directory.

.. _glob pattern: https://en.wikipedia.org/wiki/Glob_%28programming%29
3 changes: 2 additions & 1 deletion doc/stages/readers.faux.rst
@@ -59,7 +59,8 @@ bounds
form "([xmin,xmax],[ymin,ymax],[zmin,zmax])". [Default: unit cube]

count
How many synthetic points to generate before finishing? [Required]
How many synthetic points to generate before finishing? [Required, except
when mode is 'grid']

mean_x|y|z
Mean value in the x, y, or z dimension respectively. (Normal mode only)
170 changes: 84 additions & 86 deletions doc/stages/readers.numpy.rst
@@ -11,22 +11,26 @@ extension ``.npy``. As of PDAL 1.7.0, ``.npz`` files were not yet supported.

.. warning::

It is untested whether the version of Python PDAL was linked against and
the version that saved the ``.npy`` files can be mixed.
It is untested whether problems may occur if the version of Python used
to write the ``.npy`` file differs from the version used to read it.

Array Types
--------------------------------------------------------------------------------

:ref:`readers.numpy` supports reading data in two forms:

* Arrays as named fields all of the same shape (from `laspy`_ for example)
* 2-dimensional arrays
* As a `structured array`_ with specified field names (from `laspy`_ for
example)
* As a standard array that contains data of a single type.


Named Field Arrays

Structured Arrays
................................................................................

`laspy`_ provides its ``.points`` Numpy array as a bunch of named fields:
Numpy arrays can be created as structured data, where each entry is a set
of fields. Each field has a name. As an example, `laspy`_ provides its
``.points`` as an array of named fields:

::

@@ -37,23 +41,20 @@ Named Field Arrays
::

array([ ((63608330, 84939865, 40735, 65, 73, 1, -11, 126, 7326, 245385.60820904),)],
dtype=[('point', [('X', '<i4'), ('Y', '<i4'), ('Z', '<i4'), ('intensity', '<u2'), ('flag_byte', 'u1'), ('raw_classification', 'u1'), ('scan_angle_rank', 'i1'), ('user_data', 'u1'), ('pt_src_id', '<u2'), ('gps_time', '<f8')])])
dtype=[('point', [('X', '<i4'), ('Y', '<i4'), ('Z', '<i4'), ('intensity', '<u2'), ('flag_byte', 'u1'), ('raw_classification', 'u1'), ('scan_angle_rank', 'i1'), ('user_data', 'u1'), ('pt_src_id', '<u2'), ('gps_time', '<f8')])])

:ref:`readers.numpy` supports reading these Numpy arrays and mapping applicable
names to :ref:`dimensions` names. It will try to remove ``_``, ``-``, and ``space`` from
the field name and use that as a dimension name if it can match. Types are also
preserved when mapped to PDAL.
:ref:`readers.numpy` supports reading these Numpy arrays and mapping
field names to standard PDAL :ref:`dimension <dimensions>` names.
If that fails, the reader retries by removing ``_``, ``-``, or ``space``
in turn. If that also fails, the array field names are used to create
custom PDAL dimensions.
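
A structured array of the kind the reader accepts can be produced with
Numpy directly. A small sketch (the field values are invented; the
underscore in ``gps_time`` would be stripped when matching a PDAL
dimension name, per the behavior described above):

```python
import numpy as np

# Field names 'X', 'Y', 'Z' match PDAL dimensions directly; 'gps_time'
# should match after the reader strips the underscore.
points = np.array(
    [(636083.30, 849398.65, 407.35, 245385.608),
     (636084.12, 849399.77, 408.01, 245385.612)],
    dtype=[('X', '<f8'), ('Y', '<f8'), ('Z', '<f8'), ('gps_time', '<f8')])

np.save('points.npy', points)  # then e.g.: pdal info points.npy
print(points.dtype.names)  # ('X', 'Y', 'Z', 'gps_time')
```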


Two-dimensional Arrays
Standard (non-structured) Arrays
................................................................................

Typical two-dimensional `Numpy`_ arrays are also supported, with options to allow
you to map the values in the cells using the ``dimension`` option. Additionally,
you can override the `Z` value for the entire array by using the ``assign_z``
option to set a single `Z` value for the entire point cloud. Mapping the values to the
``Z`` dimension using the ``dimension`` option is also allowed.

Arrays without field information contain a single datatype. This datatype is
mapped to a dimension specified by the ``dimension`` option.

::

@@ -66,78 +67,72 @@ option to set a single `Z` value for the entire point cloud. Mapping the values
data.dtype
dtype('float64')

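A non-structured input of this kind can be generated with a sketch like
the following (random values stand in for the real ``perlin.npy`` test
data):

```python
import numpy as np

# A plain 2-D float64 array: one value per cell, no field names.
# The cell value is mapped with --readers.numpy.dimension=Intensity;
# the cell positions populate X and Y (see "X, Y and Z Mapping").
data = np.random.default_rng(0).random((100, 100))
np.save('perlin.npy', data)

print(data.shape)  # (100, 100)
print(data.dtype)  # float64
```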

In this case, the cell locations are mapped to X and Y dimensions, the cell
values are mapped to ``Intensity`` using the ``dimension`` option, and the Z
values are assigned to 4 using the ``assign_z`` option.

::

pdal info perlin.npy --readers.numpy.dimension=Intensity --readers.numpy.assign_z=4

::

{
"filename": "perlin.npy",
"pdal_version": "1.6.0 (git-version: 897afd)",
"filename": "..\/test\/data\/plang\/perlin.npy",
"pdal_version": "1.7.1 (git-version: 399e19)",
"stats":
{
"statistic":
[
{
"average": 49.995,
"count": 10000,
"kurtosis": -1.201226882,
"maximum": 100,
"minimum": 0,
"name": "X",
"position": 0,
"skewness": -0.0001281084091,
"stddev": 29.16793715,
"variance": 850.7685575
},
{
"average": 50,
"count": 10000,
"kurtosis": -1.1996846,
"maximum": 100,
"minimum": 0,
"name": "Y",
"position": 1,
"skewness": -8.69273658e-05,
"stddev": 28.87401021,
"variance": 833.7084657
},
{
"average": 4,
"count": 10000,
"kurtosis": 9997,
"maximum": 4,
"minimum": 4,
"name": "Z",
"position": 2,
"skewness": 1.844674407e+21,
"stddev": 0.04000200015,
"variance": 0.001600160016
},
{
"average": 0.01112664759,
"count": 10000,
"kurtosis": -0.5634013693,
"maximum": 0.5189296418,
"minimum": -0.5189296418,
"name": "Intensity",
"position": 3,
"skewness": -0.1127124452,
"stddev": 0.2024120437,
"variance": 0.04097063545
}
]
"statistic":
[
{
"average": 49.5,
"count": 10000,
"maximum": 99,
"minimum": 0,
"name": "X",
"position": 0,
"stddev": 28.86967866,
"variance": 833.4583458
},
{
"average": 49.5,
"count": 10000,
"maximum": 99,
"minimum": 0,
"name": "Y",
"position": 1,
"stddev": 28.87633116,
"variance": 833.8425015
},
{
"average": 0.01112664759,
"count": 10000,
"maximum": 0.5189296418,
"minimum": -0.5189296418,
"name": "Intensity",
"position": 2,
"stddev": 0.2024120437,
"variance": 0.04097063545
}
]
}
}


X, Y and Z Mapping
................................................................................
Unless the X, Y or Z dimension is specified as a field in a structured array,
the reader will create dimensions X, Y and Z as necessary and populate them
based on the position of each item of the array. Although Numpy arrays always
contain contiguous, linear data, that data can be seen to be arranged in more
than one dimension. A two-dimensional array will cause dimensions X and Y
to be populated. A three-dimensional array will cause X, Y and Z to be
populated. An array with more than three dimensions will reuse the X, Y and Z
indices for each dimension beyond the third.

When reading data, X, Y and Z can be assigned using row-major (C) order or
column-major (Fortran) order with the ``order`` option.
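
The effect of the ``order`` option on how positions are visited can be
sketched in Python (an illustration of row- versus column-major
traversal, not PDAL's internal implementation):

```python
import numpy as np

def positions(arr, order='row'):
    # Return ((i, j), value) pairs for a 2-D array, visited in
    # row-major ('row', C) or column-major ('column', Fortran) order.
    it = np.nditer(arr, flags=['multi_index'],
                   order='C' if order == 'row' else 'F')
    return [(it.multi_index, float(v)) for v in it]

arr = np.array([[10.0, 20.0],
                [30.0, 40.0]])
print(positions(arr, 'row'))
# [((0, 0), 10.0), ((0, 1), 20.0), ((1, 0), 30.0), ((1, 1), 40.0)]
print(positions(arr, 'column'))
# [((0, 0), 10.0), ((1, 0), 30.0), ((0, 1), 20.0), ((1, 1), 40.0)]
```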


.. _`Numpy`: http://www.numpy.org/
.. _`laspy`: https://github.com/laspy/laspy
.. _`structured array`: https://docs.scipy.org/doc/numpy/user/basics.rec.html

.. plugin::

@@ -149,19 +144,22 @@ Options
filename
npy file to read [Required]

count
Maximum number of points to read. [Default: unlimited]

dimension
Dimension name from :ref:`dimensions` to which the raster values are mapped.
x
Dimension number (starting from 0) to map to the ``X`` PDAL :ref:`dimension <dimensions>`

y
Dimension number (starting from 0) to map to the ``Y`` PDAL :ref:`dimension <dimensions>`
order
Either 'row' or 'column', specifying whether the X, Y and Z values are
assigned in row-major or column-major order. [Default: matches the
natural order of the array.]

z
Dimension number (starting from 0) to map to the ``Z`` PDAL :ref:`dimension <dimensions>`
.. note::
The functionality of the 'assign_z' option in previous versions is
provided with :ref:`filters.assign`

assign_z
A single value to override for ``Z`` values when ``dimension`` is used to assign the
Numpy values to another dimension
The functionality of the 'x', 'y', and 'z' options in previous versions
is generally handled with the current 'order' option.

.. _formatted: http://en.cppreference.com/w/cpp/string/basic_string/stof
17 changes: 11 additions & 6 deletions doc/stages/readers.text.rst
@@ -65,20 +65,25 @@ Example Pipeline
Options
-------

count
Maximum number of points to read

filename
text file to read [Required]

separator
Separator character to override that found in header line.

header
String to use as the file header. All lines in the file are assumed to be
records containing point data unless skipped with the 'skip' option.
[Default: None]

separator
Separator character to override that found in header line. [Default: None]

skip
Number of lines to ignore at the beginning of the file.
Number of lines to ignore at the beginning of the file. [Default: 0]

count
Maximum number of points to read [Optional]
spatialreference
Spatial reference for the file data. Most text-based formats of
SRS information are accepted, including WKT and proj.4. [Default: None]
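
The interplay of 'header' and 'skip' can be illustrated by preparing a
small file (the filename and values are invented): the two leading
comment lines would be ignored via ``skip``, while ``header`` supplies
the column names the file itself lacks.

```python
# Write a text file with two junk lines and no header line of its own.
with open('points.txt', 'w') as f:
    f.write("# produced by a survey run\n")
    f.write("# do not edit\n")
    f.write("1.0,2.0,3.0\n")
    f.write("1.5,2.5,3.5\n")

# Conceptually, this could then be read with something like:
#   pdal info points.txt --readers.text.skip=2 --readers.text.header="X,Y,Z"
print(sum(1 for _ in open('points.txt')))  # 4
```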

.. _formatted: http://en.cppreference.com/w/cpp/string/basic_string/stof
2 changes: 1 addition & 1 deletion filters/ClusterFilter.cpp
@@ -34,7 +34,7 @@

#include "ClusterFilter.hpp"

#include <pdal/Segmentation.hpp>
#include "private/Segmentation.hpp"

#include <string>

