
Modular coreg.py revision #71

Merged Apr 8, 2021 (17 commits)

Conversation

@erikmannerfelt (Contributor) commented Apr 5, 2021

The entire coregistration structure has been revisited in this PR, with modularity and simplicity in focus.
All approaches are now subclasses of the Coreg class, which is heavily inspired by the structure employed in scikit-learn. Chaining coregistration functions is as simple as adding them using the "+" operator, or by explicitly constructing a CoregPipeline from a list[Coreg].

Right now, I've added a lot of code, and have marked the old stuff with DeprecationWarnings.
Later (or in this PR), I'll remove all of the old code.

Data

In the following examples, I will use the same data, defined below:

import geoutils as gu
import xdem

# Download the necessary data. This may take a few minutes.
xdem.examples.download_longyearbyen_examples(overwrite=False)

### Load the data using xdem and geoutils (could be with rasterio and geopandas instead)
# Load a reference DEM from 2009
reference_dem = xdem.DEM(xdem.examples.FILEPATHS["longyearbyen_ref_dem"])
# Load a moderately well aligned DEM from 1990
dem_to_be_aligned = xdem.DEM(xdem.examples.FILEPATHS["longyearbyen_tba_dem"]).reproject(reference_dem)
# Load glacier outlines from 1990. This will act as the unstable ground.
glacier_outlines = gu.Vector(xdem.examples.FILEPATHS["longyearbyen_glacier_outlines"])


# Prepare the inputs for coregistration.
ref_data = reference_dem.data  # This is a numpy 2D array/masked_array
tba_data = dem_to_be_aligned.data + 50  # This is a numpy 2D array/masked_array with an artificial bias
mask = glacier_outlines.create_mask(reference_dem) != 255  # This is a boolean numpy 2D array where inliers=True
transform = reference_dem.transform  # This is a rio.transform.Affine object.

Syntax

All coregistration approaches have the same parent class, Coreg, which provides the following methods:

  • .fit() for estimating the transform.
  • .apply() for applying the transform to a DEM.
  • .apply_pts() for applying the transform to a set of 3D points.
  • .to_matrix() to convert the transform to a 4x4 transformation matrix, if possible.
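
As a quick illustration with the data defined above, the two latter methods could be used like this (a minimal sketch; the point coordinates are made up for the example):

import numpy as np
from xdem import coreg

bias_corr = coreg.BiasCorr()
bias_corr.fit(ref_data, tba_data, transform=transform, mask=mask)

# Apply the estimated transform to a set of 3D points (shape (N, 3): x/y/z).
points = np.array([[545000.0, 8650000.0, 300.0], [546000.0, 8651000.0, 400.0]])
corrected_points = bias_corr.apply_pts(points)

# Express the estimated transform as a 4x4 matrix (a pure vertical shift for BiasCorr).
print(bias_corr.to_matrix())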

The coregistration classes can be stacked sequentially into a CoregPipeline using two types of syntax:

step1 = coreg.BiasCorr()
step2 = coreg.ICP()

pipeline = coreg.CoregPipeline([step1, step2])
same_pipeline = step1 + step2

The coregistration methods are no longer dependent on GeoUtils, to hopefully appeal to a larger audience.

DEMs are np.ndarrays or np.ma.MaskedArrays, masks are boolean np.ndarrays, and transforms are rio.transform.Affine objects.

I envision the DEMCollection object as a GeoUtils-centred interface to coregistration, for example with a DEMCollection.coregister() method or similar.

Examples

Standard Nuth and Kääb (2011)

This now has a bias correction built in, in case people want to use only this one.

from xdem import coreg

nuth_kaab = coreg.NuthKaab(max_iterations=50, error_threshold=0.05)

# Fit the data to a suitable x/y/z offset.
nuth_kaab.fit(ref_data, tba_data, transform=transform, mask=mask)

# Apply the transformation to the data (or any other data)
aligned_dem = nuth_kaab.apply(tba_data, transform=transform)

ICP with bias correction

I like ICP, but it sometimes works poorly if there is a large bias between the products.
As a solution, I will chain a bias correction and ICP to run sequentially:

pipeline = coreg.BiasCorr() + coreg.ICP()

# This will first estimate the bias, apply it, and subsequently estimate the ICP transform 
pipeline.fit(ref_data, tba_data, mask=mask, transform=transform)

# This sequentially applies the Coreg.apply() functions
aligned_dem_array = pipeline.apply(tba_data, transform=transform)

# This prints the combined matrix of all steps
print(pipeline.to_matrix())

outputs:

[[ 9.99999990e-01  6.10542619e-05 -1.29476318e-04  3.39646143e+00]
 [-6.10697122e-05  9.99999991e-01 -1.19328652e-04  1.73416704e+00]
 [ 1.29469031e-04  1.19336558e-04  9.99999984e-01 -5.48167431e+01]
 [ 0.00000000e+00  0.00000000e+00  0.00000000e+00  1.00000000e+00]]

Note the bias (index 2,3): -54.8 m, i.e. -4.8 m after removing the artificial +50 m bias, which is close to what I get without the bias addition (-5.5 m). I presume the difference of 7 dm is because ICP is generally bad with biases, so the BiasCorr + ICP approach is better.
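
Since the translation components sit in the last column of the 4x4 matrix, the offsets can also be read out programmatically (a small sketch using the pipeline fitted above):

matrix = pipeline.to_matrix()
x_offset, y_offset, z_offset = matrix[:3, 3]
print(f"Vertical offset: {z_offset:.1f} m")  # about -54.8 m, including the artificial +50 m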

Ultimate "bias + ICP + Nuth and Kääb (2011)" combo

@adehecq and I had some discussions around his KH-9 DEMs.
Nuth and Kääb (2011) is not enough because there are considerable rotations involved, but ICP stops at local minima and provides only sub-acceptable results.
Plus, ICP needs a bias correction, since the KH-9 DEMs were something like 1800 m off at the start!
Here's a pipeline that does all of it:

pipeline = coreg.BiasCorr() + coreg.ICP() + coreg.NuthKaab()

pipeline.fit(ref_data, tba_data, mask=mask, transform=transform)

print(pipeline.to_matrix())

prints this for the Longyearbyen examples:

[[ 9.99999990e-01  6.10542619e-05 -1.29476318e-04 -2.52099627e+00]
 [-6.10697122e-05  9.99999991e-01 -1.19328652e-04 -9.10047542e+00]
 [ 1.29469031e-04  1.19336558e-04  9.99999984e-01 -5.43112118e+01]
 [ 0.00000000e+00  0.00000000e+00  0.00000000e+00  1.00000000e+00]]

There's quite a large difference in the x-component (index 1,3; -9.1 m here vs. 1.73 m above), which fits well with the fact that we saw a considerable east-west offset when only using ICP.
While the above step takes a while, I feel like this would be the go-to combination with poorly aligned datasets.

Making more Coreg subclasses

I wrote all of this with the intention of it being possible to extend by others.
The user interface is defined by the Coreg class, which formats the data and provides it to hidden methods defined by the subclass. Making a new subclass therefore only requires the __init__() method and four others:

from typing import Any, Optional

import numpy as np
import rasterio as rio


class NewCoregApproach(Coreg):
    def __init__(self, param1, param2):
        self.param1 = param1
        self.param2 = param2

        # This is where all the metadata and fitting data are stored.
        # I couldn't find a way to create the attribute automatically, so it has to be repeated in every subclass...
        self._meta: dict[str, Any] = {}

    def _fit_func(self, ref_dem: np.ndarray, tba_dem: np.ndarray, transform: Optional[rio.transform.Affine], weights: Optional[np.ndarray]) -> None:
        # Do the fitting magic here and save the results in self._meta.
        ...

    def _apply_func(self, dem: np.ndarray, transform: rio.transform.Affine) -> np.ndarray:
        # Do the apply magic here and return the result.
        # 'dem' will always be a np.ndarray with NaNs as nodata values.
        ...

    def _apply_pts_func(self, coords: np.ndarray) -> np.ndarray:
        # Do the apply-to-points magic here and return the result.
        # 'coords' will always have the shape (N, 3).
        ...

    def _to_matrix_func(self) -> np.ndarray:
        # Do the to_matrix magic here and return the result.
        ...
For a functional example, please see BiasCorr as this is the simplest formulation.
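
To make the pattern concrete, here is a hypothetical minimal subclass implementing a pure vertical shift (a sketch for illustration only, not the actual BiasCorr implementation):

class VerticalShift(Coreg):
    """Hypothetical example: coregister by a simple vertical shift."""

    def __init__(self):
        self._meta: dict[str, Any] = {}

    def _fit_func(self, ref_dem: np.ndarray, tba_dem: np.ndarray, transform: Optional[rio.transform.Affine], weights: Optional[np.ndarray]) -> None:
        # Estimate the median elevation difference as the vertical shift.
        self._meta["bias"] = np.nanmedian(ref_dem - tba_dem)

    def _apply_func(self, dem: np.ndarray, transform: rio.transform.Affine) -> np.ndarray:
        return dem + self._meta["bias"]

    def _apply_pts_func(self, coords: np.ndarray) -> np.ndarray:
        return coords + np.array([0.0, 0.0, self._meta["bias"]])

    def _to_matrix_func(self) -> np.ndarray:
        matrix = np.diag(np.ones(4))
        matrix[2, 3] = self._meta["bias"]
        return matrix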

And before anyone asks, _apply_func always takes a np.ndarray, but the .apply() userspace function takes either np.ndarrays or np.ma.MaskedArrays and returns the same type as its input.
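
In other words, something like this should hold (a sketch, reusing the fitted nuth_kaab object from the examples above):

masked_dem = np.ma.masked_invalid(tba_data)
aligned = nuth_kaab.apply(masked_dem, transform=transform)
assert isinstance(aligned, np.ma.MaskedArray)  # same type out as in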

Documentation

I have spent quite some time and effort making the documentation clear. You can see examples and explanations that I have written in my fork:
https://xdem-erikm.readthedocs.io/en/coreg_class/index.html

As of right now (05/04/2021), the page index in readthedocs has gone haywire. I don't know what that's about, but the rest looks as it should. (EDIT: This seems to be an issue that affects others as well)

On the todo list

  • Add a .error() method or property to all classes. NMAD? But NMAD doesn't show potential biases.
  • Allow @adehecq's filters to be joined in a pipeline, e.g. coreg.ICP() + filters.NMAD(3) + coreg.NuthKaab()
  • Improve the performance of CoregPipeline.apply() by combining matrices instead of calling .apply() on every Coreg class inside. This would mean only one reprojection is done. It would only work when .to_matrix() is supported, of course (e.g. Deramp(degree=2) cannot be described as a matrix).
  • Maybe move Deramp and BiasCorr into xdem/biascorr.py. I propose that they should still be Coreg subclasses, however.
  • Add a subsample argument to the .fit() method to improve performance at the cost of quality (added in 593ef00).
  • Test the classes on more difficult data than the "best-case" Longyearbyen examples.
  • Add verbose and/or callback arguments for progress updates.

Hope you like it! It's taken quite some time to get right, but I am pretty happy about the result. Especially with the CoregPipeline class!

@rhugonnet (Contributor):

Wow, this is truly amazing!! And glad to see I'm not the only one working during Easter ;)
I already had a first read, and I will come back to look at all the details later this afternoon 😃

@rhugonnet (Contributor) left a review:

OK, I've read through coreg.py twice now!! (and once for all the rest).

It's really brilliant 🥳! Well thought out and organized. Really impressed (and I learned quite a lot) with this synergy of Coreg and CoregPipeline, embedded in the recursivity of certain class methods like __add__ and in the overridden subclass methods like _fit_func etc.
All this is going to be extremely useful for practicality, reproducibility, modularity... It's a dream come true, really!

I had almost no specific comment because all seems really great as it is. We can debug/solve issues if we start having some when using it!

Moving forward, I have two general comments:

  • I'm thinking that maybe it would be better to have the full co-registration methods (and all their sub-methods) lie outside of the classes, for both readability (they would live next to their associated subfunctions) and reuse outside class methods (otherwise you absolutely need a Coreg object to run them). This would also make it easier to grasp for users less familiar with classes. The class methods would then always be wrappers that are quite easy to write:
class NuthKaab(Coreg):
    def _fit_func(self, ref_dem: np.ndarray, tba_dem: np.ndarray, transform: Optional[rio.transform.Affine], weights: Optional[np.ndarray], **kwargs):
        """Estimate the x/y/z offset between two DEMs."""

        offset_east, offset_north, bias = nuth_kaab(ref_dem, tba_dem, transform, weights, **kwargs)

        self._meta["offset_east_px"] = offset_east
        self._meta["offset_north_px"] = offset_north
        self._meta["bias"] = bias
  • My second thought is that we could extend the pipeline object to encompass even more methods than just the ones in coreg.py (you probably already had this in mind, Erik!). For instance, it would be great to be able to do:
step1 = filters.NMAD()
step2 = coreg.BiasCorr()
step3 = filters.MedianKernel()
step4 = coreg.ICP()
step5 = biascorr.AlongTrack()

pipeline = step1 + step2 + step3 + step4 + step5

I don't know what would be best there... probably to have classes for BiasCorr, Filter and Coreg objects, and one CoregPipeline (or other naming) to encompass them all? (one ring to rule them all 🧝)

In any case, again superb work! 👍
Can't wait to start contributing to and using all this (I will be active again very soon, started drowning a bit again with some data sharing for the global study last week, but it's almost all done now!).


transform_mgr.add_transform(i, i + 1, new_matrix)

return transform_mgr.get_transform(0, len(self.pipeline))
Contributor:

Seems like this part was a bit complex; I don't know pytransform3d too well, but it looks good!

Contributor Author:

I am a bit afraid of matrices haha, so that's why I decided on using pytransform3d. Indeed, the syntax becomes a bit convoluted here, but I think (keyword: think, I am not certain) it is necessary. Basically, what we want is to merge transforms "A -> B -> C" into just "A-C" (in my case, I've used integer indices instead: "0 -> 1 -> 2" into "0-2").

Maybe there's an easier way to do this, but pytransform3d makes sure it is done right.
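
For anyone curious, the merging boils down to something like this (a standalone sketch with made-up matrices):

import numpy as np
from pytransform3d.transform_manager import TransformManager

transform_mgr = TransformManager()

# Two made-up 4x4 transforms: step 0 -> 1 (a vertical shift) and step 1 -> 2 (an x shift).
matrix_a = np.diag(np.ones(4))
matrix_a[2, 3] = 50.0
matrix_b = np.diag(np.ones(4))
matrix_b[0, 3] = 3.0

transform_mgr.add_transform(0, 1, matrix_a)
transform_mgr.add_transform(1, 2, matrix_b)

# Merge "0 -> 1 -> 2" into a single "0-2" matrix.
print(transform_mgr.get_transform(0, 2))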

return coords_mod

def _to_matrix_func(self) -> np.ndarray:
"""Try to join the coregistration steps to a single transformation matrix."""
Contributor:

So should we define that everything that can be translated into a single transformation matrix has its place in coreg.py? Since, in the end, it is just relative alignment.
While all other methods (e.g., polynomial fit across-track, sum of sinusoids fit along-track, curvature- or terrain-dependent bias correction) should lie in biascorr.py?

Member:

I agree!


# Assign the newly calculated elevations to the aligned_dem
aligned_dem = new_elevation

Contributor:

I think I would be in favor of _fit_func and _apply_func only containing a thin wrapper around the co-registration methods, and taking the core of those methods out into normal Python functions also living in coreg.py. It's just a matter of clarity/organization.

Advantages I see:
1/ I think it would be easier for a user to grasp that there exists a function for each coreg (and that it is simply wrapped conveniently by a related Coreg class if the user wants it), rather than having to be familiar with class nomenclature + subfunctions that are called to access the core of those functions.
2/ On the same objective, we could have the "full coreg" functions living next to their subfunctions for easier reading (nuth_and_kaab next to get_horizontal_shift, etc.).
3/ It would leave us with the opportunity to apply those functions manually in other contexts without depending on the Coreg class. Currently this is impossible because it is a class method, _apply_func.

What do you think?

Member:

Mmmh, interesting point...
Regarding point 1, I believe that once Coreg is well set up, most users won't care at all about subfunctions etc. With the suggested documentation, I find it much easier to use than having to look at individual functions, which might possibly have different structures etc. So that's rather a no on this point.
Regarding point 3, I think it's good to have flexibility and to allow re-using the functions if possible. On the other hand, most users really won't dive that far down into the functionalities. The question is, will we need to access these functions ourselves? Right now I can't think of a test case. If so, I think we can still work around it by calling the class, running the _fit_func and _apply_func functions, and reading the output in _meta?
Right now, the only issue I see is that these functions only work with raster data, not point clouds. If we decide to extract the base functions out of the class, I think they should at least accept 3D and 2D arrays.


# Check that the to_matrix function works as it should
matrix = biascorr.to_matrix()
assert matrix[2, 3] == bias, matrix
Contributor:

How does the "," work with the == assertion?

Contributor Author:

Everything after the comma gets printed in the AssertionError:

assert 1 == 2, "One turns out to be different than two"
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-1-d33f3b548b72> in <module>
----> 1 assert 1 == 2, "One turns out to be different than two"

AssertionError: One turns out to be different than two

Just printing the matrix variable is not very helpful any more, I confess. It was for me to check it visually when I developed it!

@adehecq (Member) commented Apr 7, 2021

I will also try to have a close look at all this today, and hopefully test it locally. I will come back with more feedback!

@erikmannerfelt (Contributor Author):

Thank you @rhugonnet for the great and motivating feedback!!!

Answers to your general comments:

  1. It can definitely be made as you suggest. Then maybe all coreg functions should return a matrix to be compatible with each other (no x_px, y_px, z_m). Then we could have one function to apply a matrix to a DEM. Right now, NuthKaab and ICP apply shifts differently, and I don't know which is best.
  2. I totally agree on the filter + coreg merging! It's point 2 on my todo list at the top. How about renaming CoregPipeline to just Pipeline (just like scikit-learn, by the way)? Then the Filter.__add__() method could return a Pipeline instance as well, as sketched below.
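
A rough sketch of that idea (hypothetical; neither a Filter class nor the Pipeline rename exists yet):

class Filter:
    def __add__(self, other):
        # Chaining a Filter with any other step would return a Pipeline,
        # just like Coreg.__add__() returns a CoregPipeline today.
        return Pipeline([self, other])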

@adehecq (Member) commented Apr 7, 2021

First question, in your example above, you wrote:
dem_to_be_aligned = xdem.DEM(xdem.examples.FILEPATHS["longyearbyen_tba_dem"]).resample(reference_dem)
did you mean reproject, rather than resample?

@erikmannerfelt (Contributor Author):

First question, in your example above, you wrote:
dem_to_be_aligned = xdem.DEM(xdem.examples.FILEPATHS["longyearbyen_tba_dem"]).resample(reference_dem)
did you mean reproject, rather than resample?

Yes!

@erikmannerfelt (Contributor Author) commented Apr 7, 2021

First question, in your example above, you wrote:
dem_to_be_aligned = xdem.DEM(xdem.examples.FILEPATHS["longyearbyen_tba_dem"]).resample(reference_dem)
did you mean reproject, rather than resample?

@adehecq, I found another typo in my example. The mask should be != 255, not == 255, because the inliers should be True, not the outliers.

That is actually something we should discuss at some point. Is a mask True for the inliers (read: "these are the areas to include") or for the outliers (read: "these are the areas to exclude")??

EDIT: I've fixed the example above.

@adehecq (Member) commented Apr 7, 2021

It could be nice to have a verbose/plot option to print some results or plot figures. Should these options be accepted by all fit functions?

@erikmannerfelt (Contributor Author):

It could be nice to have a verbose/plot option to print some results or plot figures. Should these options be accepted by all fit functions?

Ah, that should have been on my todo list. Either a verbose flag, a callback, or both.

@erikmannerfelt (Contributor Author):

Interestingly, in your examples Bias + ICP + NuthKaab performs less well than NuthKaab alone...
That's because ICP introduces some warping of the DEM that NuthKaab cannot correct :-/

Huh.. I have a hunch why this might be. I'll get on it after lunch.

@erikmannerfelt (Contributor Author):

I think the problem is that I use scipy.interpolate.griddata, which honestly just doesn't work that well. I tried NuthKaab and BiasCorr + ICP + NuthKaab while changing the interpolation type:

Approach                     griddata interpolation   NMAD (m)   time (s)
NuthKaab                     -                        3.10       7.95
BiasCorr + ICP + NuthKaab    nearest (original)       3.71       27.91
BiasCorr + ICP + NuthKaab    linear                   3.49       49.82
BiasCorr + ICP + NuthKaab    cubic                    3.50       54.0

Note the improvement in error when linear or cubic interpolation is used instead of nearest neighbor, but the time it takes increases by almost 2x.
The reason interpolation is needed is that I currently apply a matrix by converting the DEM to a point cloud, applying the transform, and re-gridding it (using scipy.interpolate.griddata).
@adehecq or @rhugonnet, if you know a better approach for applying a transformation matrix to a DEM, I would be delighted to implement that instead!

It could be a general function:

def apply_matrix(dem: np.ndarray, transform: rio.transform.Affine, matrix: np.ndarray) -> np.ndarray:
    # do matrix magic
    return transformed_dem

which could be used for any purpose.
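
For context, here is a simplified sketch of what the current point-cloud approach does inside such a function (not the actual implementation; nodata handling is omitted):

import numpy as np
import rasterio as rio
import scipy.interpolate

def apply_matrix(dem: np.ndarray, transform: rio.transform.Affine, matrix: np.ndarray) -> np.ndarray:
    # Generate x/y coordinates for each pixel centre from the affine transform.
    rows, cols = np.mgrid[0:dem.shape[0], 0:dem.shape[1]]
    x_coords, y_coords = transform * (cols + 0.5, rows + 0.5)

    # Convert the DEM to homogeneous (4, N) points and apply the 4x4 matrix.
    points = np.vstack([x_coords.ravel(), y_coords.ravel(), dem.ravel(), np.ones(dem.size)])
    transformed = (matrix @ points)[:3].T

    # Re-grid the transformed points back onto the original pixel grid.
    return scipy.interpolate.griddata(
        transformed[:, :2], transformed[:, 2], (x_coords, y_coords), method="linear"
    )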

@adehecq (Member) commented Apr 7, 2021

I think the problem is that I use scipy.interpolate.griddata, which honestly just doesn't work that well. I tried NuthKaab and BiasCorr + ICP + NuthKaab while changing the interpolation type:

Approach                     griddata interpolation   NMAD (m)   time (s)
NuthKaab                     -                        3.10       7.95
BiasCorr + ICP + NuthKaab    nearest (original)       3.71       27.91
BiasCorr + ICP + NuthKaab    linear                   3.49       49.82
BiasCorr + ICP + NuthKaab    cubic                    3.50       54.0

Note the improvement in error when linear or cubic interpolation is used instead of nearest neighbor, but the time it takes increases by almost 2x.

Ok, good to know. But I don't think the resampling is the issue here. I looked at the DEM difference in both cases, and because ICP also allows for rotation, the quality of the coregistration was not as good over the entire area.

The reason interpolation is needed is that I currently apply a matrix by converting the DEM to a point cloud, applying the transform, and re-gridding it (using scipy.interpolate.griddata).
@adehecq or @rhugonnet, if you know a better approach for applying a transformation matrix to a DEM, I would be delighted to implement that instead!

I think this approach makes sense as it is more generic. For warping an image, I use scikit-image's warp (https://scikit-image.org/docs/dev/api/skimage.transform.html#skimage.transform.warp), or I saw that OpenCV also has warpAffine, which could be worth looking at.
For a simple shift, another option is simply to update the georeferencing information, as done by Raster.shift.
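
The georeference-update route for a simple shift is basically a one-liner on the affine transform (a sketch; assumes the offsets are in georeferenced units):

import rasterio as rio

def shift_georef(transform: rio.transform.Affine, x_offset: float, y_offset: float) -> rio.transform.Affine:
    # Only the map coordinates of the upper-left corner change; the data array stays untouched.
    return rio.transform.Affine(
        transform.a, transform.b, transform.c + x_offset,
        transform.d, transform.e, transform.f + y_offset,
    )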

subsample = int(np.count_nonzero(full_mask) * (1 - subsample))

# Randomly pick N inliers in the full_mask where N=subsample
random_falses = np.random.choice(np.argwhere(full_mask.flatten()).squeeze(), int(subsample), replace=False)
Member:

It's great to think about a subsampling functionality. However, I see that the amount of data that is loaded is still the same. I would favor an option where only a fraction of the data is loaded instead, to reduce memory usage, but maybe that requires too much re-working?

Contributor Author:

Mind if we add this as an issue for future improvement?

Member:

Sure!

In xdem/coreg.py:
bounds, resolution = _transform_to_bounds_and_res(ref_dem, transform)
points: dict[str, np.ndarray] = {}
# Generate the x and y coordinates for the reference_dem
x_coords, y_coords = np.meshgrid(
Member:

See my comment about subsampling. It could be done at this stage for example, to reduce memory usage.

In xdem/coreg.py:

def _apply_func(self, dem: np.ndarray, transform: rio.transform.Affine) -> np.ndarray:
"""Apply the coregistration matrix to a DEM."""
bounds, resolution = _transform_to_bounds_and_res(dem, transform)
Member:

This call seems to almost always be followed by meshgrid to generate a coordinate grid. Maybe have a specific function for this? If, for example, we implement subsampling in the future, it would then have to be replaced only once.
The only cases where _transform_to_bounds is used is, I believe, to get the resolution, which is actually contained in the transform itself.

Contributor Author:

I'm pretty sure the resolution is not contained in the transform. See the rio docs, where the height and width have to be supplied each time a raster-specific operation is made. I believe the Affine transform only contains information on the upper-left corner, the scale, and the rotation, but I might be wrong.

Anyway, I cleaned up the code a bit and made a function for x_coords and y_coords creation, so we can more easily change this in the future if we want to (56a4d1b).

Member:

The resolution is in transform[0] and transform[4]. But I agree, to get the bounds, you need the shape.
My reply contained a typo; what I meant is that on at least one occasion, _transform_to_bounds_and_res was not followed by meshgrid, and in that case only the resolution was needed, not the bounds, making the call to this function useless.
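
For reference, reading the resolution straight off the transform (assuming a north-up, non-rotated raster):

res_x = transform.a   # pixel width, i.e. transform[0]
res_y = -transform.e  # pixel height; transform[4] is negative for north-up rasters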

dem_mod = dem.copy()

for coreg in self.pipeline:
dem_mod = coreg.apply(dem_mod, transform)
Member:

That's really fine for now, but in the future, I think this could be done in one step using some matrix magic! ;-)

Contributor Author:

Indeed, I can add an issue for it.

Problems arise with intermediate non-matrixable steps such as Deramp. It could be implemented in a way where all "matrixable" transforms that occur in a row are merged:

pipeline = [A, B, C (matrix not supported), D, E, F]

apply:
A-B, C, D-F
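
A sketch of that grouping logic (assuming .to_matrix() raises an exception for unsupported steps like Deramp):

def group_matrix_steps(pipeline: list) -> list[list]:
    """Group consecutive matrix-supporting steps so each group needs only one resampling (a sketch)."""
    groups: list[list] = []
    previous_matrixable = None
    for step in pipeline:
        try:
            step.to_matrix()
            matrixable = True
        except Exception:  # assumed failure mode for non-matrixable steps
            matrixable = False
        if matrixable and previous_matrixable:
            groups[-1].append(step)  # extend the current run of matrixable steps
        else:
            groups.append([step])
        previous_matrixable = matrixable
    return groups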

Member:

Yes, but as discussed elsewhere with @rhugonnet, I think all transformations that cannot be converted into a matrix format should go into biascorr.py (this includes deramp) and should be handled a bit differently. So I wouldn't worry too much about deramp here.

@adehecq (Member) commented Apr 7, 2021

Finally managed to go through all of it!!
Fantastic work Erik! I really like the way it's going. This should make DEM coregistration easy for everyone!
I made some suggestions along the way, but I think this PR should be merged sooner rather than later. Anything you cannot take into account rapidly can be added as an issue or in the project's tab.

@erikmannerfelt marked this pull request as ready for review April 8, 2021 10:09
@erikmannerfelt (Contributor Author):

Thank you both for your great feedback! As suggested by @adehecq, maybe we can merge this and add the todos as issues?

@adehecq (Member) commented Apr 8, 2021

Thank you both for your great feedback! As suggested by @adehecq, maybe we can merge this and add the todo's as issues?

Good to go for me!

Labels: documentation, enhancement
3 participants