Commit
Merge f43cf9d into d3d894f
NicoRenaud committed Feb 22, 2018
2 parents d3d894f + f43cf9d commit a59683a
Showing 119 changed files with 72,093 additions and 157 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -103,3 +103,5 @@ ENV/
# PyCharm
.idea/workspace.xml
.idea/tasks.xml
.idea/eEcoLiDAR.iml
.idea/misc.xml
2 changes: 1 addition & 1 deletion .idea/eEcoLiDAR.iml

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion .idea/misc.xml

Some generated files are not rendered by default.

32 changes: 32 additions & 0 deletions .prospector.yml
@@ -0,0 +1,32 @@
# prospector configuration file

---

output-format: grouped

strictness: veryhigh
doc-warnings: true
test-warnings: false

pyroma:
  run: true

pep8:
  full: true
  options:
    max-line-length: 120

pylint:
  options:
    max-line-length: 120
    variable-rgx: '[a-z_][a-z0-9_]{0,30}$'

pep257:
  # see http://pep257.readthedocs.io/en/latest/error_codes.html
  disable: [
    # Disabled because not part of the official PEP257 convention:
    D203, # 1 blank line required before class docstring
    D212, # Multi-line docstring summary should start at the first line
    D213, # Multi-line docstring summary should start at the second line
    D404, # First word of the docstring should not be This
  ]
3 changes: 3 additions & 0 deletions .style.yapf
@@ -0,0 +1,3 @@
[style]
# The column limit.
column_limit=120
12 changes: 6 additions & 6 deletions .travis.yml
@@ -8,30 +8,30 @@ python:
- "3.5"
- "3.6"

before_install:
  - if [[ "$TRAVIS_PYTHON_VERSION" == "2.7" ]]; then
      wget http://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh -O miniconda.sh;
    else
      wget http://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh;
    fi
  - bash miniconda.sh -b -p $HOME/miniconda
  - export PATH="$HOME/miniconda/bin:$PATH"
  - conda config --set always_yes yes
  - conda update -q conda
  - conda create -q -n test-environment python=$TRAVIS_PYTHON_VERSION pip openblas numpy scipy
  - source activate test-environment
  - "pip install -q --upgrade 'pip'"
  - "pip install -q 'coverage'"
  - "pip install -q 'pytest-cov'"
  - "pip install -q 'codacy-coverage'"
  - "pip install -r requirements.txt"
  - "pip install -q coveralls"

install:
- "echo done"

# command to run tests, e.g. python setup.py test
script:
- cd laserchicken
- pytest --cov=laserchicken --cov-report xml:coverage.xml

after_script:
28 changes: 28 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,28 @@
# Contributing

## Pull Request Submission Guidelines

Before you submit your pull request consider the following guidelines:
* Please communicate with us up front about any new feature you would like to add, to avoid disappointment later. You can do this by creating an [issue](https://github.com/eEcoLiDAR/eEcoLiDAR/issues)
* Fork the repository to your own github account if you don't have write access
* Clone the repository on your local machine
* Make your changes in a new git branch:
`git checkout -b my-fix-branch master`
* Install the development environment:
`python setup.py develop`
* Make your changes and add tests demonstrating that you fixed the bug or covering the new feature you added
* Order your imports:
`isort -w 120 your_changed_file.py`
* Format your code according to the project standard:
`yapf -i your_changed_file.py`
* Check that your code is clean and fix any issues:
`prospector your_changed_file.py`
* Run tests and make sure they pass:
`python setup.py test`
* Commit your changes and upload:
```
git add changed_file_1.py changed_file_2.py
git commit -m 'Your commit message'
git push
```
* Create a [pull request](https://github.com/eEcoLiDAR/eEcoLiDAR/pulls)
20 changes: 20 additions & 0 deletions Data_Formats_README.md
@@ -184,6 +184,26 @@ To stay close to the chosen file format the python data structure will look like
'y': {'type': 'float', 'data': np.array([0.1, 0.2, 0.3])},
'z': {'type': 'float', 'data': np.array([0.1, 0.2, 0.3])},
'return': {'type': 'int', 'data': np.array([1, 1, 2])}}}
}
}
```
After we compute some features we enrich the data structure with extra attributes:
```
{'log': [{'time': '2018-01-18 16:01', 'module': 'load', 'parameters': [], 'version': '0.9.2'},
         {'time': '2018-01-18 16:01', 'module': 'filter', 'parameters': [('z', 'gt', '1.5')], 'version': '0.9.2'}],
 'pointcloud':
     {'offset': {'type': 'double', 'data': 12.1}},
 'vertex':
     {'x': {'type': 'float', 'data': np.array([0.1, 0.2, 0.3])},
      'y': {'type': 'float', 'data': np.array([0.1, 0.2, 0.3])},
      'z': {'type': 'float', 'data': np.array([0.1, 0.2, 0.3])},
      'return': {'type': 'int', 'data': np.array([1, 1, 2])},
      'eigen_val_1': {'type': 'float', 'data': np.array([0.1, 0.5, 0.25])},
      'eigen_val_2': {'type': 'float', 'data': np.array([0.02, 0.05, 0.025])},
      ...
      'echo_ratio': {'type': 'float', 'data': np.array([0.05, 0.04, 0.36])}}
}
```
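As a sketch, this structure can be built and queried with plain dicts and NumPy arrays (values are the illustrative ones from the example above; `'y'` is omitted for brevity):

```python
import numpy as np

# A runnable sketch of the structure above.
point_cloud = {
    'log': [{'time': '2018-01-18 16:01', 'module': 'load',
             'parameters': [], 'version': '0.9.2'}],
    'pointcloud': {'offset': {'type': 'double', 'data': 12.1}},
    'vertex': {
        'x': {'type': 'float', 'data': np.array([0.1, 0.2, 0.3])},
        'z': {'type': 'float', 'data': np.array([0.1, 0.2, 0.3])},
        'return': {'type': 'int', 'data': np.array([1, 1, 2])},
    },
}

# Attribute access is two dictionary lookups plus the 'data' key:
z_values = point_cloud['vertex']['z']['data']
mask = z_values > 0.15           # e.g. the filter ('z', 'gt', 0.15)
print(int(mask.sum()))           # → 2
```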

This gives us the three data types that we want to store in the memory (and file):
62 changes: 62 additions & 0 deletions laserchicken/compute_neighbors.py
@@ -0,0 +1,62 @@
from laserchicken import utils, kd_tree
from laserchicken.volume_specification import Sphere, InfiniteCylinder


def compute_cylinder_neighborhood(environment_pc, target_pc, radius):
    """
    Find the indices of points within a cylindrical neighbourhood (using a KD-tree)
    for each point of a target point cloud among the points from an environment point cloud.

    :param environment_pc: environment point cloud
    :param target_pc: point cloud that contains the points at which neighborhoods are to be calculated
    :param radius: search radius for neighbors
    :return: indices of neighboring points from the environment point cloud for each target point
    """
    env_tree = kd_tree.get_kdtree_for_pc(environment_pc)
    target_tree = kd_tree.get_kdtree_for_pc(target_pc)
    return target_tree.query_ball_tree(env_tree, radius)
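The ball query returns, for every target point, the indices of environment points within the radius. A brute-force sketch of that computation, assuming the KD-tree is built on the x and y coordinates only (that assumption is what makes the search volume an infinite cylinder along z):

```python
import numpy as np

def cylinder_neighbors_bruteforce(env_xy, target_xy, radius):
    """For each target, indices of env points within `radius` in the xy-plane."""
    result = []
    for tx, ty in target_xy:
        # Squared planar distance from this target to every environment point.
        d2 = (env_xy[:, 0] - tx) ** 2 + (env_xy[:, 1] - ty) ** 2
        result.append(np.nonzero(d2 <= radius ** 2)[0].tolist())
    return result

env = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
targets = np.array([[0.5, 0.0]])
print(cylinder_neighbors_bruteforce(env, targets, 1.0))  # → [[0, 1]]
```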


def compute_sphere_neighborhood(environment_pc, target_pc, radius):
    """
    Find the indices of points within a spherical neighbourhood for each point of a target point cloud
    among the points from an environment point cloud.

    :param environment_pc: environment point cloud
    :param target_pc: point cloud that contains the points at which neighborhoods are to be calculated
    :param radius: search radius for neighbors
    :return: indices of neighboring points from the environment point cloud for each target point
    """
    neighborhood_indices = compute_cylinder_neighborhood(environment_pc, target_pc, radius)

    result = []
    for i in range(len(neighborhood_indices)):
        target_x, target_y, target_z = utils.get_point(target_pc, i)
        result_indices = []
        for j in neighborhood_indices[i]:
            env_x, env_y, env_z = utils.get_point(environment_pc, j)
            if abs(target_z - env_z) > radius:
                continue
            if (env_x - target_x) ** 2 + (env_y - target_y) ** 2 + (env_z - target_z) ** 2 <= radius ** 2:
                result_indices.append(j)
        result.append(result_indices)
    return result
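The sphere case thus reduces to filtering the cylinder candidates: a cheap |Δz| rejection first, then the full 3D distance check. A self-contained sketch of that filtering step (function name and arrays are illustrative):

```python
import numpy as np

def filter_sphere(env_xyz, candidate_indices, target, radius):
    """Keep only candidates within a true 3D sphere around `target`."""
    kept = []
    for j in candidate_indices:
        dx, dy, dz = env_xyz[j] - target
        if abs(dz) > radius:          # cheap reject before the full check
            continue
        if dx * dx + dy * dy + dz * dz <= radius * radius:
            kept.append(j)
    return kept

env = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 2.0]])
# Both points lie inside an infinite cylinder of radius 1 around the
# origin, but only the first is inside the sphere of radius 1:
print(filter_sphere(env, [0, 1], np.array([0.0, 0.0, 0.0]), 1.0))  # → [0]
```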


def compute_neighborhoods(env_pc, target_pc, volume_description):
    """
    Find a subset of points in a neighbourhood in the environment point cloud for each point in a target point cloud.

    :param env_pc: environment point cloud
    :param target_pc: point cloud that contains the points at which neighborhoods are to be calculated
    :param volume_description: volume object that describes the shape and size of the search volume
    :return: indices of neighboring points from the environment point cloud for each target point
    """
    volume_type = volume_description.get_type()
    if volume_type == Sphere.TYPE:
        return compute_sphere_neighborhood(env_pc, target_pc, volume_description.radius)
    elif volume_type == InfiniteCylinder.TYPE:
        return compute_cylinder_neighborhood(env_pc, target_pc, volume_description.radius)
    raise ValueError('Neighborhood computation error because volume type "{}" is unknown.'.format(volume_type))
72 changes: 72 additions & 0 deletions laserchicken/density.py
@@ -0,0 +1,72 @@
import math

import numpy as np
from shapely.geometry import MultiPoint, Point

from laserchicken.keys import point


def area_density_(pc, r):
    rad = r
    x = pc[point]['x']['data']
    if rad is None:
        # Infer a radius from the minimum bounding rectangle of the points.
        y = pc[point]['y']['data']
        points = [Point(x[i], y[i]) for i in range(len(x))]
        mbr = MultiPoint(points).convex_hull.envelope
        (x_min, y_min, x_max, y_max) = mbr.bounds
        rad = math.ceil(math.sqrt((x_max - x_min) ** 2 + (y_max - y_min) ** 2) / 2)

    if rad <= 0:
        raise ValueError("The radius should be bigger than zero.")

    area = math.pi * rad ** 2
    return len(x) / area


def volume_density_(pc, r):
    rad = r
    x = pc[point]['x']['data']
    if rad is None:
        # Infer a radius from the minimum bounding rectangle of the points.
        y = pc[point]['y']['data']
        points = [Point(x[i], y[i]) for i in range(len(x))]
        mbr = MultiPoint(points).convex_hull.envelope
        (x_min, y_min, x_max, y_max) = mbr.bounds
        rad = math.ceil(math.sqrt((x_max - x_min) ** 2 + (y_max - y_min) ** 2) / 2)

    if rad <= 0:
        raise ValueError("The radius should be bigger than zero.")

    z = pc[point]['z']['data']
    volume = math.pi * rad ** 2 * (np.max(z) - np.min(z))
    return len(x) / volume


def area_density(pc):
    if pc is None:
        raise ValueError('Input point cloud cannot be None.')
    return area_density_(pc, None)


def area_density_rad(pc, rad):
    if pc is None:
        raise ValueError('Input point cloud cannot be None.')
    if rad is None:
        raise ValueError('Input radius cannot be None.')
    return area_density_(pc, rad)


def volume_density(pc):
    if pc is None:
        raise ValueError('Input point cloud cannot be None.')
    return volume_density_(pc, None)


def volume_density_rad(pc, rad):
    if pc is None:
        raise ValueError('Input point cloud cannot be None.')
    if rad is None:
        raise ValueError('Input radius cannot be None.')
    return volume_density_(pc, rad)
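The core arithmetic above is N / (π r²) for area density (volume density additionally divides by the z-extent). A minimal sketch with a hypothetical helper name:

```python
import math
import numpy as np

def area_density_sketch(x, radius):
    """Hypothetical stand-in: points per unit area for a circular
    footprint of the given radius."""
    if radius is None or radius <= 0:
        raise ValueError('The radius should be bigger than zero.')
    return len(x) / (math.pi * radius ** 2)

x = np.zeros(4)                       # four points, footprint radius 2
density = area_density_sketch(x, 2.0)
print(round(density, 3))              # → 0.318
```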
83 changes: 83 additions & 0 deletions laserchicken/feature_extractor/__init__.py
@@ -0,0 +1,83 @@
"""Feature extractor module."""
import importlib
import re

import numpy as np

from laserchicken import keys, utils
from .eigenvals_feature_extractor import EigenValueFeatureExtractor
from .entropy_feature_extractor import EntropyFeatureExtractor
from .sigma_z_feature_extractor import SigmaZFeatureExtractor


def _feature_map(module_name=__name__):
"""Construct a mapping from feature names to feature extractor classes."""
module = importlib.import_module(module_name)
return {
feature_name: extractor
for name, extractor in vars(module).items() if re.match('^[A-Z][a-zA-Z0-9_]*FeatureExtractor$', name)
for feature_name in extractor.provides()
}


FEATURES = _feature_map()


def compute_features(env_point_cloud, neighborhoods, target_point_cloud, feature_names, volume, overwrite=False,
**kwargs):
"""
Compute features for each target and store result as attributes in target point cloud
:param env_point_cloud: environment point cloud
:param neighborhoods: list of neighborhoods which are themselves lists of indices referring to the environment
:param target_point_cloud: point cloud of targets
:param feature_names: list of features that are to be calculated
:param volume: object describing the volume that contains the neighborhood points
:param overwrite: if true, even features that are already in the targets point cloud will be calculated and stored
:param kwargs: keyword arguments for the individual feature extractors
:return: None, results are stored in attributes of the target point cloud
"""
ordered_features = _make_feature_list(feature_names)
for feature in ordered_features:
if (not overwrite) and (feature in target_point_cloud[keys.point]):
continue # Skip feature calc if it is already there and we do not overwrite
extractor = FEATURES[feature]()
_add_or_update_feature(env_point_cloud, neighborhoods, target_point_cloud, extractor, volume, overwrite, kwargs)
utils.add_metadata(target_point_cloud, type(extractor).__module__, extractor.get_params())


def _add_or_update_feature(env_point_cloud, neighborhoods, target_point_cloud, extractor, volume, overwrite, kwargs):
n_targets = len(target_point_cloud[keys.point]["x"]["data"])
for k in kwargs:
setattr(extractor, k, kwargs[k])
provided_features = extractor.provides()
n_features = len(provided_features)
feature_values = [np.empty(n_targets, dtype=np.float64) for i in range(n_features)]
for target_index in range(n_targets):
point_values = extractor.extract(env_point_cloud, neighborhoods[target_index], target_point_cloud,
target_index, volume)
if n_features > 1:
for i in range(n_features):
feature_values[i][target_index] = point_values[i]
else:
feature_values[0][target_index] = point_values
for i in range(n_features):
feature = provided_features[i]
if overwrite or (feature not in target_point_cloud[keys.point]):
target_point_cloud[keys.point][feature] = {"type": np.float64, "data": feature_values[i]}


def _make_feature_list(feature_names):
feature_list = reversed(_make_feature_list_helper(feature_names))
seen = set()
return [f for f in feature_list if not (f in seen or seen.add(f))]


def _make_feature_list_helper(feature_names):
feature_list = feature_names
for feature_name in feature_names:
extractor = FEATURES[feature_name]()
dependencies = extractor.requires()
feature_list.extend(dependencies)
feature_list.extend(_make_feature_list_helper(dependencies))
return feature_list
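The registry pattern used by `_feature_map` can be sketched in isolation: classes are picked out of a namespace by the `*FeatureExtractor` naming convention and keyed by the feature names they provide (the extractor class and namespace contents here are hypothetical):

```python
import re

class DensityFeatureExtractor:
    """Hypothetical extractor following the naming convention."""
    @staticmethod
    def provides():
        return ['point_density']

def feature_map(namespace):
    """Map every feature name to the class that provides it, selecting
    classes by the *FeatureExtractor naming convention."""
    return {
        feature_name: extractor
        for name, extractor in namespace.items()
        if re.match('^[A-Z][a-zA-Z0-9_]*FeatureExtractor$', name)
        for feature_name in extractor.provides()
    }

# Non-matching names (helpers, lowercase functions) are ignored:
registry = feature_map({'DensityFeatureExtractor': DensityFeatureExtractor,
                        'some_helper': max})
print(sorted(registry))  # → ['point_density']
```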