Clean documentation and TODOs (#208)

* Clean .travis.yml

* Remove pytest cache

* Remove TODO from examples

* Remove TODO in test euclidean space

* TODO in hypersphere

* Modify Contributing.md

* Remove TODO from connection

* Remove TODO from curves

* Remove unnecessary TODOs in hypersphere

* Remove unnecessary TODOs in matrices

* Clean TODOs in Riemannian metric

* Clean TODO in SE3

* Clean TODOs in SOn

* Clean TODO of visualization

* Clean TODO in test sen

* Clean TODO in hyperbolic space

* Final TODO cleaning

* Modify Contributing
ninamiolane authored and johmathe committed Dec 31, 2018
1 parent 6e97bcb commit ce31656f363382ee70ef17723c44bbaa51caba5c

@@ -8,8 +8,6 @@ python:
- "3.6"
- "3.6-dev"

# TODO: Add the test for python 3.3

install:
- pip install --upgrade pip setuptools wheel
- pip install -q -r requirements.txt --only-binary=numpy,scipy
@@ -24,15 +24,11 @@ Most of our discussions take place on [issues][link_issues].
[git][link_git] is a really useful tool for version control.
[GitHub][link_github] sits on top of git and supports collaborative and distributed working.

If you're not yet familiar with `git`, there are lots of great resources to help you *git* started!
If you are not yet familiar with `git`, there are lots of great resources to help you *git* started!
Some of our favorites include the [git Handbook][link_handbook] and
the [Software Carpentry introduction to git][link_swc_intro].

On GitHub, you will use [Markdown][markdown] to chat in issues and pull requests.
You can think of Markdown as a few little symbols around your text that will allow GitHub
to render the text with a little bit of formatting.
For example, you could write words as bold (`**bold**`), or in italics (`*italics*`),
or as a [link][rick_roll] (`[link](https://youtu.be/dQw4w9WgXcQ)`) to another webpage.

GitHub has a really helpful page for getting started with
[writing and formatting Markdown on GitHub][writing_formatting_github].
@@ -59,9 +55,9 @@ but those accepted fastest will follow a workflow similar to the following:

**1. Comment on an existing issue or open a new issue referencing your addition.**

This allows other members of the ``geomstats`` development team to confirm that you aren't
overlapping with work that's currently underway and that everyone is on the same page
with the goal of the work you're going to carry out.
This allows other members of the ``geomstats`` development team to confirm that you are not
overlapping with work that is currently underway and that everyone is on the same page
with the goal of the work you are going to carry out.

[This blog][link_pushpullblog] is a nice explanation of why putting this work in up front
is so useful to everyone involved.
@@ -73,7 +69,7 @@ Changes here will not effect anyone else's work, so it is a safe space to explor

Make sure to [keep your fork up to date][link_updateupstreamwiki] with the master repository.

**3. Make the changes you've discussed, following the [geomstats coding style guide](#geomstats-coding-style-guide).**
**3. Make the changes you have discussed, following the [geomstats coding style guide](#geomstats-coding-style-guide).**

- Create your feature branch (`git checkout -b feature/fooBar`)
- Commit your changes (`git commit -am 'Add some fooBar'`)
@@ -82,10 +78,20 @@ Make sure to [keep your fork up to date][link_updateupstreamwiki] with the maste
Try to keep the changes focused.
If you feel tempted to "branch out" then please make a [new branch][link_branches].

If you are adding a new feature, don't forget to add the corresponding [unit tests][link_unit_tests].
**4. Add the corresponding unit tests.**

If you are adding a new feature, do not forget to add the corresponding [unit tests][link_unit_tests].
As ``geomstats`` runs on both numpy and tensorflow, your unit tests should cover these two backends (a minimal sketch follows this hunk).

**4. Submit a [pull request][link_pullrequest].**
**5. Follow the Geomstats Coding Style Guide.**

Ensure that your code is compliant with [PEP8][link_pep8],
the coding style guide for python.

Use [flake8][link_flake8] or [yapf][link_yapf] to automatically enforce this style (a scripted sketch follows this hunk).


**6. Submit a [pull request][link_pullrequest].**

A member of the development team will review your changes to confirm
that they can be merged into the main code base.
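(Not part of this commit.) Step 4 in the hunk above asks new features to ship unit tests that run on both the numpy and tensorflow backends. The sketch below shows one way to keep a test backend-agnostic by going only through the `geomstats.backend` facade; the test class, test values, and assertion are illustrative assumptions, and the project's own helpers such as `assertAllClose` (visible in the diffed tests further down) handle backend-specific evaluation.

```python
# Sketch only, not from this commit: a backend-agnostic unit test.
# geomstats dispatches gs.* calls to the backend selected through the
# GEOMSTATS_BACKEND environment variable (numpy by default, tensorflow
# as an alternative), so a test written against gs.* can target both.
import unittest

import geomstats.backend as gs


class TestMyFeature(unittest.TestCase):

    def test_norm_of_3_4_vector(self):
        # Build inputs through gs.* only; no direct numpy/tensorflow calls.
        vector = gs.array([3., 4.])
        result = gs.linalg.norm(vector)
        expected = 5.
        # Runs as-is under the default numpy backend; the geomstats test
        # helpers (e.g. assertAllClose) cover backend-specific evaluation.
        self.assertTrue(gs.allclose(result, expected))


if __name__ == '__main__':
    unittest.main()
```

The same test module can then be exercised once per backend, for example by setting `GEOMSTATS_BACKEND=numpy` and `GEOMSTATS_BACKEND=tensorflow` before invoking the test runner.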
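(Also not part of this commit.) Step 5 above points to flake8 and yapf; the usual workflow is simply running `flake8` on the files you touched. As a hedged sketch, the same check can be scripted through flake8's legacy Python API; the target path below is a placeholder.

```python
# Sketch, assuming flake8 >= 3: PEP8/style check via the legacy Python API
# (the command-line equivalent is `flake8 <paths>`).
from flake8.api import legacy as flake8


def count_style_violations(paths):
    """Return the number of flake8 violations found in the given files."""
    style_guide = flake8.get_style_guide()
    report = style_guide.check_files(paths)
    return report.total_errors


if __name__ == '__main__':
    # Placeholder path for illustration; point it at the files you changed.
    print(count_style_violations(['geomstats/hypersphere.py']))
```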
@@ -99,18 +105,12 @@ Use the Github labels to label your pull request:
* ``refactoring``: refactoring existing code
* ``continuous integration``: updates to continuous integration infrastructure

## Geomstats Coding Style Guide

Ensure that your code is compliant with [PEP8][link_pep8],
the coding style guide for python, as enforced by [flake8][link_flake8].

Flake8 is a tool, called a [linter][link_linters] that automatically enforces this style.

## Recognizing Contributions

We welcome and recognize all contributions from documentation to testing to code development.
You can see a list of current contributors in our [README.md][link_readme].
If you are new to the project, don't forget to add your name and affiliation there!
If you are new to the project, do not forget to add your name and affiliation there!

## Thank You!

@@ -143,5 +143,6 @@ If you are new to the project, don't forget to add your name and affiliation the
[link_pep8]: https://www.python.org/dev/peps/pep-0008/
[link_linters]: https://en.wikipedia.org/wiki/Lint_(software)
[link_flake8]: http://flake8.pycqa.org/en/latest/
[link_yapf]: https://github.com/google/yapf
[link_readme]: https://github.com/geomstats/geomstats/blob/master/README.md
[link_fmriprep]: https://github.com/poldracklab/fmriprep/
@@ -67,7 +67,6 @@ def grad(y_pred, y_true,

quat_vec_norm = gs.linalg.norm(quat_vec, axis=1)
quat_sq_norm = quat_vec_norm ** 2 + quat_scalar ** 2
# TODO(nina): check that this sq norm is 1?

quat_arctan2 = gs.arctan2(quat_vec_norm, quat_scalar)
differential_scalar = - 2 * quat_vec / (quat_sq_norm)
@@ -14,7 +14,6 @@
METRIC = SO3_GROUP.bi_invariant_metric

N_SAMPLES = 20
# TODO(nina): find real data set


def main():
@@ -126,7 +126,6 @@ def geodesic(self, initial_point,
"""
Geodesic associated with the connection.
"""
# TODO(nina): integrate the ODE
raise NotImplementedError(
'Geodesics are not implemented.')

@@ -159,7 +158,6 @@ def cometric_matrix(self, base_point):
return cometric_matrix

def metric_derivative(self, base_point):
# TODO(nina): same operation without autograd package?
metric_derivative = autograd.jacobian(self.metric_matrix)
return metric_derivative(base_point)

@@ -184,5 +182,4 @@ def torsion(self, base_point):
"""
Torsion tensor associated with the Levi-Civita connection is zero.
"""
# TODO(nina)
return gs.zeros((self.dimension,) * 3)
@@ -154,7 +154,6 @@ def geodesic(self, initial_curve,
Geodesic specified either by an initial point and an end point,
either by an initial point and an initial tangent vector.
"""
# TODO(alice): vectorize
curve_ndim = 2
initial_curve = gs.to_ndarray(initial_curve,
to_ndim=curve_ndim+1)
@@ -395,8 +394,6 @@ def geodesic(self, initial_curve,
Geodesic specified either by an initial curve and an end curve,
either by an initial curve and an initial tangent vector.
"""
# TODO(alice): vectorize

if not isinstance(self.embedding_metric, EuclideanMetric):
raise AssertionError('The geodesics are only implemented for '
'dicretized curves embedded in a '
@@ -267,8 +267,6 @@ def log(self, point, base_point):
log = (gs.einsum('ni,nj->nj', coef_1, point)
- gs.einsum('ni,nj->nj', coef_2, base_point))

# TODO(nina): This tries to solve the bug of dist not
# being 0 between a point and itself
mask_same_values = gs.isclose(point, base_point)
mask_else = gs.equal(mask_same_values, gs.array(False))
mask_else_float = gs.cast(mask_else, gs.float32)
@@ -285,10 +283,6 @@ def dist(self, point_a, point_b):
"""
Geodesic distance between two points.
"""
# TODO(nina): case gs.dot(unit_vec, unit_vec) != 1
# if gs.all(gs.equal(point_a, point_b)):
# return 0.

norm_a = self.embedding_metric.norm(point_a)
norm_b = self.embedding_metric.norm(point_b)
inner_prod = self.embedding_metric.inner_product(point_a, point_b)
@@ -72,7 +72,6 @@ class MatricesMetric(RiemannianMetric):
"""
Euclidean metric on matrices given by the Frobenius inner product.
"""
# TODO(nina): Inheritance from Euclidean Metric here?
def __init__(self, m, n):
dimension = m*n
super(MatricesMetric, self).__init__(
@@ -243,7 +243,7 @@ def mean(self, points,
"""
Frechet mean of (weighted) points.
"""
# TODO(nina): profile this code to study performance,
# TODO(nina): Profile this code to study performance,
# i.e. what to do with sq_dists_between_iterates.

if isinstance(points, list):
@@ -54,7 +54,6 @@ def __init__(self, n, point_type=None, epsilon=0.):
super(SpecialEuclideanGroup, self).__init__(
dimension=self.dimension)

# TODO(nina): keep the names rotations and translations here?
self.rotations = SpecialOrthogonalGroup(n=n, epsilon=epsilon)
self.translations = EuclideanSpace(dimension=n)

@@ -118,7 +117,6 @@ def regularize(self, point, point_type=None):

elif point_type == 'matrix':
point = gs.to_ndarray(point, to_ndim=3)
# TODO(nina): regularization for matrices?
regularized_point = gs.copy(point)

return regularized_point
@@ -167,8 +165,6 @@ def regularize_tangent_vec(
[rotations_vec, tangent_vec[:, dim_rotations:]], axis=1)

elif point_type == 'matrix':
# TODO(nina): regularization in terms
# of skew-symmetric matrices?
regularized_vec = tangent_vec

return regularized_vec
@@ -537,7 +533,7 @@ def exponential_matrix(self, rot_vec):
coef_2[mask_close_to_0] = (1. / 6.
- angle[mask_close_to_0] ** 3 / 120.)

# TODO(nina): check if the discountinuity as 0 is expected.
# TODO(nina): Check if the discountinuity at 0 is expected.
coef_1[mask_0] = 0
coef_2[mask_0] = 0

@@ -598,7 +594,6 @@ def group_exponential_barycenter(

inv_rot_mats = rotations.matrix_from_rotation_vector(
-rotation_vectors)
# TODO(nina): this is the same mat multiplied several times
matrix_aux = gs.matmul(mean_rotation_mat, inv_rot_mats)
assert matrix_aux.shape == (n_points,) + (dim_rotations,) * 2

@@ -3,8 +3,6 @@
i.e. the Lie group of rotations in n dimensions.
"""

# TODO(nina): make code robust to different types and input structures
# TODO(nina): should the conversion functions be methods?
import geomstats.backend as gs

from geomstats.embedded_manifold import EmbeddedManifold
@@ -146,7 +144,6 @@ def regularize(self, point, point_type=None):

elif point_type == 'matrix':
point = gs.to_ndarray(point, to_ndim=3)
# TODO(nina): regularization for matrices?
regularized_point = gs.to_ndarray(point, to_ndim=3)

return regularized_point
@@ -199,12 +196,10 @@ def regularize_tangent_vec_at_identity(
regularized_vec = mask_else_float * (
regularized_vec / coef)
else:
# TODO(nina): regularization needed in nD?
# TODO(nina): Check if/how regularization is needed in nD?
regularized_vec = tangent_vec

elif point_type == 'matrix':
# TODO(nina): regularization in terms
# of skew-symmetric matrices?
regularized_vec = tangent_vec

return regularized_vec
@@ -227,7 +222,6 @@ def regularize_tangent_vec(
if metric is None:
metric = self.left_canonical_metric
base_point = self.regularize(base_point, point_type)
# TODO: what is the output of this function with N base_points?
n_vecs = tangent_vec.shape[0]

jacobian = self.jacobian_translation(
@@ -252,12 +246,10 @@ def regularize_tangent_vec(
gs.transpose(jacobian,
axes=(0, 2, 1)))
else:
# TODO(nina): is regularization needed in nD?
# TODO(nina): Check if/how regularization is needed in nD?
regularized_tangent_vec = tangent_vec

elif point_type == 'matrix':
# TODO(nina): regularization in terms
# of skew-symmetric matrices?
regularized_tangent_vec = tangent_vec

return regularized_tangent_vec
@@ -266,7 +258,6 @@ def projection(self, mat):
"""
Project a matrix on SO(n), using the Frobenius norm.
"""
# TODO(nina): projection when the point_type is not 'matrix'?
mat = gs.to_ndarray(mat, to_ndim=3)

n_mats, mat_dim_1, mat_dim_2 = mat.shape
@@ -706,7 +697,7 @@ def matrix_from_quaternion(self, quaternion):
rot_mat = gs.zeros((n_quaternions,) + (self.n,) * 2)

for i in range(n_quaternions):
# TODO(nina): vectorize by applying the composition of
# TODO(nina): Vectorize by applying the composition of
# quaternions to the identity matrix
column_1 = [w[i] ** 2 + x[i] ** 2 - y[i] ** 2 - z[i] ** 2,
2 * x[i] * y[i] - 2 * w[i] * z[i],
@@ -970,15 +961,11 @@ def quaternion_from_tait_bryan_angles(self, tait_bryan_angles,
tait_bryan_angles)

elif extrinsic_xyz:
# TODO(nina): Put a direct implementation here,
# instead of converting to matrices first
rot_mat = self.matrix_from_tait_bryan_angles_extrinsic_xyz(
tait_bryan_angles)
quat = self.quaternion_from_matrix(rot_mat)

elif intrinsic_zyx:
# TODO(nina): Put a direct implementation here,
# instead of converting to matrices first
tait_bryan_angles_reversed = gs.flip(tait_bryan_angles, axis=1)
rot_mat = self.matrix_from_tait_bryan_angles_extrinsic_xyz(
tait_bryan_angles_reversed)
@@ -1292,7 +1279,6 @@ def random_uniform(self, n_samples=1, point_type=None):
random_point = self.regularize(
random_point, point_type=point_type)
elif point_type == 'matrix':
# TODO(nina): does this give the uniform distribution on rotations?
random_matrix = gs.random.rand(n_samples, self.n, self.n)
random_point = self.projection(random_matrix)

@@ -173,8 +173,6 @@ def log(self, point, base_point, max_iter=30):
https://arxiv.org/pdf/1604.05054.pdf
"""
# TODO(nina): Add unit tests for auxiliary functions

def normal_component_qr(point, base_point, matrix_m):
"""
Computes the QR decomposition of the normal component
@@ -23,9 +23,6 @@
'H2_poincare_disk', 'H2_poincare_half_plane', 'H2_klein_disk']


# TODO(nina): Clean-up OOP of this module


class Arrow3D():
"An arrow in 3d, i.e. a point and a vector."
def __init__(self, point, vector):
@@ -117,7 +114,6 @@ def __init__(self, n_meridians=40, n_circles_latitude=None,
if n_circles_latitude is None:
n_circles_latitude = max(n_meridians / 2, 4)

# TODO(nina): use backend for this
u, v = np.mgrid[0:2 * gs.pi:n_meridians * 1j,
0:gs.pi:n_circles_latitude * 1j]

@@ -326,7 +326,6 @@ def test_geodesic_and_belongs(self):
self.assertAllClose(expected, result)

def test_mean(self):
# TODO(nina): Fix the fact that it doesn't work for [1., 4.]
point = gs.array([[1., 4.]])
result = self.metric.mean(points=[point, point, point])
expected = point
@@ -87,7 +87,6 @@ def test_intrinsic_and_extrinsic_coords_vectorization(self):
[3., 2., 0., 2.]])
point_int = self.space.extrinsic_to_intrinsic_coords(point_ext)
result = self.space.intrinsic_to_extrinsic_coords(point_int)
# TODO(nina): Make sure this holds for (x, y, z, ..) AND (-x, y,z)
expected = point_ext
expected = helper.to_vector(expected)

@@ -301,7 +300,6 @@ def test_exp_and_log_and_projection_to_tangent_space_general_case(self):
# Riemannian Exp then Riemannian Log
# General case
base_point = gs.array([4.0, 1., 3.0, math.sqrt(5)])
# TODO(nina): this fails for high euclidean norms of vector_1
vector = gs.array([2.0, 1.0, 1.0, 1.0])
vector = self.space.projection_to_tangent_space(
vector=vector,
@@ -324,7 +322,6 @@ def test_dist(self):
self.assertAllClose(result, expected)

def test_exp_and_dist_and_projection_to_tangent_space(self):
# TODO(nina): this fails for high norms of vector
base_point = gs.array([4.0, 1., 3.0, math.sqrt(5)])
vector = gs.array([0.001, 0., -.00001, -.00003])
tangent_vec = self.space.projection_to_tangent_space(
@@ -341,7 +338,7 @@ def test_exp_and_dist_and_projection_to_tangent_space(self):
self.assertAllClose(result, expected, atol=1e-2)

def test_geodesic_and_belongs(self):
# TODO(nina): this tests fails when geodesic goes "too far"
# TODO(nina): Fix this tests, as it fails when geodesic goes "too far"
initial_point = gs.array([4.0, 1., 3.0, math.sqrt(5)])
n_geodesic_points = 100
vector = gs.array([1., 0., 0., 0.])