Merge pull request #46 from ssciwr/finalize-notebooks
Finalize notebooks
dokempf committed Sep 30, 2021
2 parents ccec6d5 + e0ef16a commit 7538c1f
Showing 4 changed files with 344 additions and 159 deletions.
2 changes: 1 addition & 1 deletion adaptivefiltering/widgets.py
@@ -284,7 +284,7 @@ def _setter(_d):

     def _construct_enum(self, schema, label=None, root=False):
         # We omit trivial enums, but make sure that they end up in the result
-        if len(schema["enum"]) is 1:
+        if len(schema["enum"]) == 1:
             return WidgetFormElement(
                 getter=lambda: schema["enum"][0], setter=lambda _: None, widgets=[]
             )
240 changes: 240 additions & 0 deletions jupyter/datasets.ipynb
@@ -0,0 +1,240 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Working with LIDAR datasets in `adaptivefiltering`\n",
"\n",
"This notebook will explain how Lidar datasets are treated in `adaptivefiltering` by showcasing the most common use cases. If you are not yet familiar with Jupyter, check the [Introduction to Python+Jupyter notebook](python.ipynb) first."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The first thing to do in a Jupyter notebook that uses `adaptivefiltering` is to import the library:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import adaptivefiltering"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"### Loading datasets"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`adaptivefiltering` handles Lidar data sets in LAS/LAZ format. To load a data set, we construct a `DataSet` object given its filename and assign it to a variable `ds`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"ds = adaptivefiltering.DataSet(filename=\"data/500k_NZ20_Westport.laz\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In above example, we are loading a small sample data set that is provided by `adaptivefiltering`. You can also load your own data set by providing its filename. `adaptivefiltering` currently only supports datasets in LAS and LAZ format. The dataset filename is assumed to either be an absolute path, be located in the current working directory or that you first specified its location using the `set_data_directory` function:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"adaptivefiltering.set_data_directory(\"/some/directory\")"
]
},
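{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of the third option (assuming, hypothetically, that the sample file has been placed in the registered data directory), a dataset could then be loaded with a filename relative to that directory:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical: this filename resolves relative to the data directory registered above\n",
"ds2 = adaptivefiltering.DataSet(filename=\"500k_NZ20_Westport.laz\")"
]
},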
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"### Visualizing datasets"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With the dataset loaded as the object `ds`, we have several ways of visualizing the data set directly in Jupyter. For a 2D visual representation of the surface, a hillshade model can be used: "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ds.show_hillshade(resolution=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `resolution` parameter specifies the spatial resolution in meters. Alternatively, a scatter plots or a 2.5D surface plot can be used for visualization:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ds.show_points()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ds.show_mesh(resolution=3)"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"### Restricting datasets"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If your Lidar dataset is very large, handling the entire data set becomes unwieldy, especially if we want to interactively tune ground point filtering pipelines. It is therefore important to crop the dataset to a subset that we can easily work on. We do so by showing an interactive map, adding a polygon with the polygon selector tool and hitting the *Finalize* button:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"rds = ds.restrict()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the above, the restricted dataset is assigned to a new object `rds`. This follows a design principle of `adaptivefiltering`: All objects (datasets, filter pipelines etc.) are *immutable* - operations that work on datasets *never* implicitly modify an object. Instead the, provided input (`ds` in the above) is left untouched, and a modified copy is returned. This results in an increased memory consumption, but makes the interactive exploration of ground point filtering with `adaptivefiltering` easier to handle."
]
},
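{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of what immutability means in practice (assuming the cells above have been executed), the original `ds` can still be visualized unchanged after the restriction:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The original dataset is untouched by the restriction above;\n",
"# only the copy rds carries the cropped data.\n",
"ds.show_hillshade(resolution=1)"
]
},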
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"### Transforming datasets"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above principle of *immutability* is also followed by all other functions that transform datasets. The most prominent such transformation is the application of ground point filter pipelines. It is of such importance, that it is covered in a separate [notebook on filter pipelines](filtering.ipynb). Other data transformations are e.g. `remove_classification` which removes any existing classification data from a dataset:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ds = adaptivefiltering.remove_classification(ds)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, we have chosen to assign the transformed dataset to the same name as the original dataset. This is not violating the principle of immutability, because we explicitly chose to do so."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Saving datasets"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once we have achieved a result that is worth storing, we can save the dataset to a LAS/LAZ file by calling its `save` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ds.save(\"without_classification.las\", compress=False, overwrite=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the above, the first argument is the filename to save to (relative paths are interpreted w.r.t. the current working directory). Optionally, LAZ compression can be activated by setting `compress=True`. If an existing file would be overwritten, explicit permission needs to do that needs to be granted by setting `overwrite=True`."
]
}
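{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a brief usage sketch (the output filename is hypothetical), a LAZ-compressed copy that is allowed to replace an existing file would be written like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Write a compressed LAZ file, overwriting any existing file of that name\n",
"ds.save(\"without_classification.laz\", compress=True, overwrite=True)"
]
}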
],
"metadata": {
"interpreter": {
"hash": "7a42a518fa29c240d94160a104e2571f110fb503155511d5e924fad0a3805a00"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.7"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
147 changes: 0 additions & 147 deletions jupyter/demo.ipynb

This file was deleted.

