clean up a bunch of sphinx doc warnings
hobu committed Sep 15, 2015
1 parent 6a9112f commit e8db413
Showing 15 changed files with 116 additions and 96 deletions.
4 changes: 2 additions & 2 deletions doc/docs.rst
@@ -12,17 +12,17 @@ Contents:

download
compilation/index
commandline/index
apps
stages/index
pipeline
vagranttutorial
vagrant
community
faq
development/index
tutorial/index
api/index
metadata
vagrant
copyright

Indices and tables
2 changes: 1 addition & 1 deletion doc/stages/filters.ferry.rst
@@ -1,7 +1,7 @@
.. _filters.ferry:

filters.ferry
-============
+================================================================================

The ferry filter is used to stash intermediate variables as part of
processing data. For example, a common scenario is to keep both the
4 changes: 2 additions & 2 deletions doc/stages/filters.merge.rst
@@ -1,13 +1,13 @@
.. _filters.merge:

filters.merge
-============
+===============================================================================

The merge filter combines input from multiple sources into a single output.
No checks are made to ensure that points from the various sources have similar
dimensions or are generally compatible. Notably, dimensions are not
initialized when points merged from various sources do not have dimensions in
common.

Example
-------
2 changes: 1 addition & 1 deletion doc/stages/filters.mortonorder.rst
@@ -1,7 +1,7 @@
.. _filters.mortonorder:

filters.mortonorder
-===========
+================================================================================

Sorts the XY data using `Morton ordering`_.
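
As background, Morton ordering interleaves the bits of the X and Y values so that points close together in 2D tend to land close together in the resulting 1D sort order. A minimal sketch in Python, assuming non-negative integer grid coordinates (illustrative only, not the PDAL implementation):

```python
def morton_key(x, y, bits=16):
    """Interleave the bits of x and y; y supplies the higher bit of each pair."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

# Sorting points by their Morton key groups spatially nearby points together.
points = [(3, 5), (0, 0), (7, 1), (2, 6)]
points.sort(key=lambda p: morton_key(*p))
```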

8 changes: 4 additions & 4 deletions doc/stages/filters.splitter.rst
@@ -1,7 +1,7 @@
.. _filters.splitter:

filters.splitter
-===============
+===============================================================================

The splitter filter breaks a point cloud into square tiles of a size that
you choose. The origin of the tiles is chosen arbitrarily unless specified
@@ -12,7 +12,7 @@ for each tile as its output.

Splitting is usually applied to data read from files (which produce one large
stream of points) before the points are written to a database (which prefers
data segmented into smaller blocks).

Example
-------
@@ -40,10 +40,10 @@ Options
length
Length of the sides of the tiles that are created to hold points.
[Default: 1000]

origin_x
X Origin of the tiles. [Default: none (chosen arbitrarily)]

origin_y
Y Origin of the tiles. [Default: none (chosen arbitrarily)]
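
The relationship between a point, the tile origin, and the tile length can be sketched as follows (an illustrative helper, not PDAL code):

```python
import math

def tile_index(x, y, length=1000.0, origin_x=0.0, origin_y=0.0):
    """Return the (column, row) of the square tile containing point (x, y)."""
    return (math.floor((x - origin_x) / length),
            math.floor((y - origin_y) / length))
```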

4 changes: 2 additions & 2 deletions doc/stages/filters.stats.rst
@@ -1,7 +1,7 @@
.. _filters.stats:

filters.stats
-============
+===============================================================================

The stats filter calculates the minimum, maximum and average (mean) values
of dimensions. On request it will also provide an enumeration of values of
@@ -41,7 +41,7 @@ Options
dimensions
A comma-separated list of dimensions whose statistics should be
processed. If not provided, statistics for all dimensions are calculated.

enumerate
A comma-separated list of dimensions whose values should be enumerated.
Note that this list does not add to the list of dimensions that may be
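
For reference, the per-dimension statistics the filter reports (minimum, maximum, and mean) amount to the following; an illustrative sketch, not the PDAL implementation:

```python
def dimension_stats(values):
    """Minimum, maximum, and average (mean) of one dimension's values."""
    return {"min": min(values),
            "max": max(values),
            "mean": sum(values) / len(values)}
```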
4 changes: 2 additions & 2 deletions doc/stages/readers.buffer.rst
@@ -3,7 +3,7 @@
readers.buffer
==============

-The `readers.buffer`_ stage is a special stage that allows
+The :ref:`readers.buffer` stage is a special stage that allows
you to read data from your own PointView rather than
fetching the data from a specific reader. In the :ref:`writing` example,
it is used to take a simple listing of points and turn them into an
@@ -13,7 +13,7 @@ LAS file.
Example
-------

-See :ref:`writing` for an example usage scenario for `readers.buffer`.
+See :ref:`writing` for an example usage scenario for :ref:`readers.buffer`.

Options
-------
8 changes: 5 additions & 3 deletions doc/stages/readers.geowave.rst
@@ -1,9 +1,11 @@
.. _readers.geowave:

readers.geowave
-============
+===============================================================================

-The **GeoWave reader** uses `GeoWave`_ to read from Accumulo. GeoWave entries are stored using EPSG:4326. Instructions for configuring the GeoWave plugin can be found `here`_
+The **GeoWave reader** uses `GeoWave`_ to read from Accumulo. GeoWave entries
+are stored using EPSG:4326. Instructions for configuring the GeoWave plugin
+can be found `here`_

Example
-------
@@ -57,7 +59,7 @@ pointsPerEntry
Sets the maximum number of points per Accumulo entry when using FeatureCollectionDataAdapter. [Default: 5000u]

bounds
The extent of the bounding rectangle to use to query points, expressed as a string, e.g. "([xmin,xmax],[ymin,ymax],[zmin,zmax])". [Default: unit cube]


.. _GeoWave: https://ngageoint.github.io/geowave/
24 changes: 12 additions & 12 deletions doc/stages/writers.las.rst
@@ -35,7 +35,7 @@ filename
the placeholder will be replaced with an incrementing integer. If no
placeholder is found, all PointViews provided to the writer are
aggregated into a single file for output. Multiple PointViews are usually
the result of using :ref:`filters.splitter` or :ref:`filters.chipper`.
[Required]

forward
@@ -65,7 +65,7 @@ forward
output file as necessary. Unlike header values, VLRs from multiple input
files are accumulated and each is written to the output file. Forwarded
VLRs may contain duplicate User ID/Record ID pairs.

minor_version
All LAS files are version 1, but the minor version (0 - 4) can be specified
with this option. [Default: 3]
@@ -77,32 +77,32 @@ software_id
creation_doy
Number of the day of the year (January 1 == 0; December 31 == 364, or 365 in
a leap year) this file is being created.

creation_year
Year (Gregorian) this file is being created.

dataformat_id
Controls whether information about color and time are stored with the point
information in the LAS file. [Default: 3]

* 0 == no color or time stored
* 1 == time is stored
* 2 == color is stored
* 3 == color and time are stored
* 4 [Not Currently Supported]
* 5 [Not Currently Supported]
* 6 == time is stored (version 1.4+ only)
* 7 == time and color are stored (version 1.4+ only)
* 8 == time, color and near infrared are stored (version 1.4+ only)
* 9 [Not Currently Supported]
* 10 [Not Currently Supported]
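
The list above can be summarized as a lookup from ``dataformat_id`` to the attributes stored with each point; an illustrative sketch, not a PDAL API:

```python
# dataformat_id -> (time stored, color stored, near-infrared stored);
# ids 4, 5, 9 and 10 are omitted because they are not currently supported.
POINT_FORMATS = {
    0: (False, False, False),
    1: (True,  False, False),
    2: (False, True,  False),
    3: (True,  True,  False),
    6: (True,  False, False),   # version 1.4+ only
    7: (True,  True,  False),   # version 1.4+ only
    8: (True,  True,  True),    # version 1.4+ only
}

def stores_time(dataformat_id):
    """True if the given point format records a time value."""
    return POINT_FORMATS[dataformat_id][0]
```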

system_id
String identifying the system that created this LAS file. [Default: "PDAL"]

a_srs
The spatial reference system of the file to be written. Can be an EPSG string (e.g. "EPSG:26910") or a WKT string. [Default: Not set]

global_encoding
Various indicators to describe the data. See the LAS documentation. Note
that PDAL will always set bit four when creating LAS version output.
@@ -131,7 +131,7 @@ offset_x, offset_y, offset_z
dimension. [Default: 0]

Note: written value = (nominal value - offset) / scale.
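
That formula round-trips with the reader-side reconstruction (stored * scale + offset); a small illustrative sketch, not PDAL code:

```python
def to_stored(nominal, scale=0.01, offset=0.0):
    """Integer value actually written to the point record."""
    return round((nominal - offset) / scale)

def to_nominal(stored, scale=0.01, offset=0.0):
    """Coordinate a reader reconstructs from the stored integer."""
    return stored * scale + offset
```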

filesource_id
The file source id number to use for this file (a value between
1 and 65535) [Default: 0]
@@ -145,11 +145,11 @@ extra_dims
Extra dimensions to be written as part of each point beyond those specified
by the LAS point format. The format of the option is
<dimension_name>=<type>, ... where type is one of:
int8, int16, int32, int64, uint8, uint16, uint32, uint64, float, double
'_t' may be added to any of the type names as well (e.g., uint32_t). When
the version of the output file is specified as 1.4 or greater, an extra
bytes VLR (User ID: LASF_Spec, Record ID: 4), is created that describes the
extra dimensions specified by this option.
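
Parsing that option string amounts to the following; an illustrative sketch (the helper name is hypothetical, not a PDAL API):

```python
VALID_TYPES = {"int8", "int16", "int32", "int64",
               "uint8", "uint16", "uint32", "uint64",
               "float", "double"}

def parse_extra_dims(option):
    """Parse '<name>=<type>, ...' into a list of (name, type) pairs."""
    dims = []
    for item in option.split(","):
        name, _, typ = item.strip().partition("=")
        if typ.endswith("_t"):          # 'uint32_t' is accepted for 'uint32'
            typ = typ[:-2]
        if typ not in VALID_TYPES:
            raise ValueError("unknown type %r for dimension %r" % (typ, name))
        dims.append((name, typ))
    return dims
```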

.. _LAS format: http://asprs.org/Committee-General/LASer-LAS-File-Format-Exchange-Activities.html

2 changes: 1 addition & 1 deletion doc/stages/writers.null.rst
@@ -1,4 +1,4 @@
-.. _writers.text:
+.. _writers.null:

writers.null
============
18 changes: 10 additions & 8 deletions doc/stages/writers.text.rst
@@ -3,7 +3,9 @@
writers.text
============

-The **text writer** writes out to a text file. This is useful for debugging or getting smaller files into an easily parseable format. The text writer supports both `GeoJson`_ and `CSV`_ output.
+The **text writer** writes out to a text file. This is useful for debugging or
+getting smaller files into an easily parseable format. The text writer
+supports both `GeoJSON`_ and `CSV`_ output.


Example
@@ -33,27 +35,27 @@ filename
File to write to, or "STDOUT" to write to standard out [Required]

format
Output format to use. One of "geojson" or "csv". [Default: **csv**]

order
Comma-separated list of dimension names, giving the desired column order in the output file, for example "X,Y,Z,Red,Green,Blue". [Default: none]

keep_unspecified
Should we output any fields that are not specified in the dimension order? [Default: **true**]

jscallback
When producing GeoJSON, the callback allows you to wrap the data in a function, so the output can be evaluated in a <script> tag.

quote_header
When producing CSV, should the column header be quoted? [Default: **true**]

newline
When producing CSV, what newline character should be used? (For Windows, "\\r\\n" is common.) [Default: **\\n**]

delimiter
When producing CSV, what character to use as a delimiter? [Default: **,**]
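
The CSV-side options map closely onto Python's standard ``csv`` module; a rough illustrative equivalent (not how PDAL is implemented):

```python
import csv
import io

buf = io.StringIO()
# delimiter and lineterminator play the role of the writer's 'delimiter'
# and 'newline' options; QUOTE_NONNUMERIC quotes the header strings but
# not the numeric values, similar to quote_header=true.
writer = csv.writer(buf, delimiter=",", lineterminator="\n",
                    quoting=csv.QUOTE_NONNUMERIC)
writer.writerow(["X", "Y", "Z"])
writer.writerow([1.0, 2.0, 3.0])
print(buf.getvalue())
```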


-.. _GeoJson: http://geojson.org
+.. _GeoJSON: http://geojson.org
.. _CSV: http://en.wikipedia.org/wiki/Comma-separated_values

16 changes: 8 additions & 8 deletions doc/tutorial/liblas_to_pdal.rst
@@ -68,9 +68,9 @@ pointer was used with libLAS:

.. code-block:: cpp
-if (LAS_header == NULL) {
-    /* fail */
-}
+   if (LAS_header == NULL) {
+       /* fail */
+   }
In general, PDAL will throw a ``pdal_error`` exception when something
goes wrong and it can't recover, such as when the file can't be opened.
@@ -79,11 +79,11 @@ in ``try-catch`` block:

.. code-block:: cpp
-try {
-    /* actual code */
-} catch {
-    /* fail in your own way */
-}
+   try {
+       /* actual code */
+   } catch (pdal_error &e) {
+       /* fail in your own way */
+   }
Dataset properties
2 changes: 1 addition & 1 deletion doc/tutorial/overview.rst
@@ -1,4 +1,4 @@
-.. _overview
+.. _overview:

******************************************************************************
PDAL Architecture Overview
