Merge remote-tracking branch 'origin/master' into readers.numpy
hobu committed Mar 28, 2018
2 parents 4814859 + d7d129d commit f963b29
Showing 10 changed files with 54 additions and 14 deletions.
3 changes: 3 additions & 0 deletions doc/apps/info.rst
@@ -13,6 +13,8 @@ Displays information about a point cloud file, such as:
* the plain text format should be reStructuredText if possible to allow a user
  to retransform the output into whatever they want with ease

Processing is performed with stream mode if possible.

::

$ pdal info <input>
@@ -27,6 +29,7 @@ Displays information about a point cloud file, such as:
--stats Dump stats on all points (reads entire dataset)
--boundary Compute a hexagonal hull/boundary of dataset
--dimensions Dimensions on which to compute statistics
--enumerate Dimensions whose values should be enumerated
--schema Dump the schema
--pipeline-serialization Output filename for pipeline serialization
--summary Dump summary of the info
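
For example, the new ``--enumerate`` option might be combined with ``--stats``
to list each distinct value of a dimension (a hypothetical invocation; the
filename and dimension are placeholders)::

    $ pdal info input.las --stats --enumerate Classification
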
6 changes: 3 additions & 3 deletions doc/apps/pipeline.rst
@@ -4,8 +4,9 @@
pipeline
********************************************************************************

-The ``pipeline`` command is used to execute :ref:`pipeline` JSON. See
-:ref:`reading` or :ref:`pipeline` for more information.
+The ``pipeline`` command is used to execute :ref:`pipeline` JSON. The pipeline
+is run in stream mode if possible. See :ref:`reading` or :ref:`pipeline` for
+more information.

::

@@ -21,7 +22,6 @@ The ``pipeline`` command is used to execute :ref:`pipeline` JSON. See
progress information. The file/FIFO must exist. PDAL will not create the
progress file.
--stdin, -s Read pipeline from standard input
-  --stream              Attempt to run pipeline in streaming mode.
--metadata Metadata filename


2 changes: 1 addition & 1 deletion doc/apps/translate.rst
@@ -6,7 +6,7 @@ translate

The ``translate`` command can be used for simple conversion of files based on
their file extensions. It can also be used for constructing pipelines directly
-from the command-line.
+from the command-line. Processing is done with stream mode if possible.

::

13 changes: 11 additions & 2 deletions doc/faq.rst
@@ -12,7 +12,7 @@ FAQ
pronounced to rhyme with "GDAL".

.. it is properly pronounced like the dog though :)
|
* Why do I get the error "Couldn't create ... stage of type ..."?

In almost all cases this error occurs because you're trying to run a stage
@@ -36,6 +36,15 @@ FAQ

.. index:: PCL

* Why am I using 100GB of memory when trying to process a 10GB LAZ file?

If you're performing an operation that is using
:ref:`standard mode <processing_modes>`, PDAL will read all points into
memory at once. Compressed files, like LAZ, can decompress to much larger
sizes before PDAL can process the data. Furthermore, some operations
(notably :ref:`DEM creation<writers.gdal>`) can use large amounts of
additional memory during processing before the output can be written.
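
  If every stage in the pipeline supports it, processing in
  :ref:`stream mode <processing_modes>` keeps only a fixed-size chunk of
  points in memory at a time. A plain conversion through
  :ref:`pdal translate<translate_command>`, for example, will stream when
  possible (filenames are placeholders)::

      $ pdal translate input.laz output.las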
|
* What is PDAL's relationship to PCL?

PDAL is PCL's data translation cousin. PDAL is focused on providing a
@@ -55,7 +64,7 @@ FAQ
with LASlib and LAStools. PDAL, on the other hand, aims to be
the ultimate library and a set of tools for manipulating and processing
point clouds and is easily extensible by its users.

|
* Are there any command line tools in PDAL similar to LAStools?

Yes. The ``pdal`` command provides a wide range of features which go
23 changes: 23 additions & 0 deletions doc/pipeline.rst
@@ -111,6 +111,29 @@ with the :ref:`writers.gdal` writer:
.. _`UTM`: http://spatialreference.org/ref/epsg/nad83-utm-zone-16n/
.. _`Geographic`: http://spatialreference.org/ref/epsg/4326/

.. _processing_modes:

Processing Modes
--------------------------------------------------------------------------------

PDAL processes data in one of two ways: standard mode or stream mode. In
standard mode, all input is read into memory before it is processed. Many
algorithms require standard mode because they need access to all points;
operations that sort points or depend on a point's neighbors are typical
examples.

For operations that don't require access to all points, PDAL provides stream
mode. Stream mode processes points through a pipeline in chunks, which
reduces memory requirements.

When using :ref:`pdal info<info_command>`,
:ref:`pdal translate<translate_command>`, or
:ref:`pdal pipeline<pipeline_command>`, PDAL uses stream mode if possible.
If stream mode can't be used, the applications fall back to standard mode
processing.

Users of the PDAL API can explicitly control the selection of the PDAL
processing mode.
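
A minimal sketch of that control through the C++ API (mirroring the logic in
``kernels/PipelineKernel.cpp`` below; the chunk size and the lack of error
handling are simplifications)::

    #include <string>
    #include <pdal/PipelineManager.hpp>
    #include <pdal/PointTable.hpp>

    void run(const std::string& pipelineFile)
    {
        pdal::PipelineManager mgr;
        mgr.readPipeline(pipelineFile);

        if (mgr.pipelineStreamable())
        {
            // Stream mode: points flow through in fixed-size chunks.
            pdal::FixedPointTable table(10000);
            mgr.executeStream(table);
        }
        else
        {
            // Standard mode: the whole dataset is read before processing.
            mgr.execute();
        }
    }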

Pipeline Objects
--------------------------------------------------------------------------------
11 changes: 6 additions & 5 deletions filters/NeighborClassifierFilter.cpp
@@ -112,18 +112,19 @@ void NeighborClassifierFilter::doOneNoDomain(PointRef &point, PointRef &temp,
    double thresh = iSrc.size()/2.0;

    // vote NNs
-   std::map<double, unsigned int> counts;
+   using CountMap = std::map<int, unsigned int>;
+   CountMap counts;
+   //std::map<int, unsigned int> counts;
    for (PointId id : iSrc)
    {
        temp.setPointId(id);
-       double votefor = temp.getFieldAs<double>(m_dim);
-       counts[votefor]++;
+       counts[temp.getFieldAs<int>(m_dim)]++;
    }

    // pick winner of the vote
    auto pr = *std::max_element(counts.begin(), counts.end(),
-       [](const std::pair<int, int>& p1, const std::pair<int, int>& p2) {
-       return p1.second < p2.second; });
+       [](CountMap::const_reference p1, CountMap::const_reference p2)
+       { return p1.second < p2.second; });

    // update point
    auto oldclass = point.getFieldAs<double>(m_dim);
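For reference, the vote-winner idiom used above in standalone form, with
made-up vote counts (a sketch, not PDAL code)::

    #include <algorithm>
    #include <iostream>
    #include <map>

    int main()
    {
        using CountMap = std::map<int, unsigned int>;
        CountMap counts{{2, 5}, {5, 12}, {6, 3}}; // classification -> votes

        // Compare (classification, count) pairs by their count.
        auto pr = *std::max_element(counts.begin(), counts.end(),
            [](CountMap::const_reference a, CountMap::const_reference b)
            { return a.second < b.second; });

        std::cout << "winner: " << pr.first << " (" << pr.second << " votes)\n";
        return 0;
    }
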
4 changes: 4 additions & 0 deletions kernels/InfoKernel.cpp
@@ -140,6 +140,8 @@ void InfoKernel::addSwitches(ProgramArgs& args)
        m_boundary);
    args.add("dimensions", "Dimensions on which to compute statistics",
        m_dimensions);
    args.add("enumerate", "Dimensions whose values should be enumerated",
        m_enumerate);
    args.add("schema", "Dump the schema", m_showSchema);
    args.add("pipeline-serialization", "Output filename for pipeline "
        "serialization", m_pipelineFile);
@@ -310,6 +312,8 @@ void InfoKernel::setup(const std::string& filename)
    Options filterOptions;
    if (m_dimensions.size())
        filterOptions.add({"dimensions", m_dimensions});
    if (m_enumerate.size())
        filterOptions.add({"enumerate", m_enumerate});
    m_statsStage = &m_manager.makeFilter("filters.stats", *stage,
        filterOptions);
    stage = m_statsStage;
1 change: 1 addition & 0 deletions kernels/InfoKernel.hpp
@@ -82,6 +82,7 @@ class PDAL_DLL InfoKernel : public Kernel
    bool m_boundary;
    std::string m_pointIndexes;
    std::string m_dimensions;
    std::string m_enumerate;
    std::string m_queryPoint;
    std::string m_pipelineFile;
    bool m_showSummary;
4 changes: 2 additions & 2 deletions kernels/PipelineKernel.cpp
@@ -91,7 +91,7 @@ void PipelineKernel::addSwitches(ProgramArgs& args)
args.add("pointcloudschema", "dump PointCloudSchema XML output",
m_PointCloudSchemaOutput).setHidden();
args.add("stdin,s", "Read pipeline from standard input", m_usestdin);
args.add("stream", "Attempt to run pipeline in streaming mode.", m_stream);
args.add("stream", "This option is obsolete.", m_stream);
args.add("metadata", "Metadata filename", m_metadataFile);
}

@@ -132,7 +132,7 @@ int PipelineKernel::execute()
}

    m_manager.readPipeline(m_inputFile);
-   if (m_stream)
+   if (m_manager.pipelineStreamable())
    {
        FixedPointTable table(10000);
        m_manager.executeStream(table);
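With this change the mode is chosen from the pipeline's capabilities rather
than from the ``--stream`` flag, which is still accepted but ignored. A
hypothetical invocation (the filename is a placeholder)::

    $ pdal pipeline mypipeline.json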
1 change: 0 additions & 1 deletion vendor/kazhdan/PoissonRecon.h
@@ -497,7 +497,6 @@ void PoissonRecon<Real>::execute()
    OctNode< TreeNodeData >::SetAllocator( MEMORY_ALLOCATOR_BLOCK_SIZE );
    readData();

-   Real pointWeightSum;
    calcDensity();
    calcNormalData();
    trim();
