Yes, this looks really great!! I'd be curious what @fmaussion thinks about this. How is the modular approach of OGGM conceived? I'm not familiar with it myself.
There is no consensus on the optimal coregistration pipeline, so adding modularity would help explore this.
A structure suggestion
It could be done in scikit-learn's fashion for regression models:
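As an illustration of that fashion (a sketch with toy data; the specific models chosen here are my own example, not a snippet from scikit-learn's docs):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

# Toy 1-D regression data.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 2.0, 4.0, 6.0])

# Every scikit-learn regressor exposes the same .fit()/.predict()
# interface, so swapping one model for another is a one-line change.
for model in (LinearRegression(), RandomForestRegressor(n_estimators=10, random_state=0)):
    model.fit(X, y)
    predictions = model.predict(X)
```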
The point of the above example is to show how interchangeable the different regression models are.
The function `sklearn.pipeline.make_pipeline()` allows custom regression pipelines to be created; if, for example, the samples should be normalized first, this could easily be done. A similar approach could be done for `coreg.py`, or something similar.
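On the scikit-learn side, such a normalization-then-regression pipeline could look like this (a minimal sketch; the choice of `StandardScaler` and `LinearRegression` is illustrative):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

# Toy data with a feature on a large scale.
X = np.array([[0.0], [10.0], [20.0], [30.0]])
y = np.array([0.0, 1.0, 2.0, 3.0])

# make_pipeline() chains the steps: StandardScaler normalizes the
# samples before LinearRegression sees them, all behind one .fit().
pipeline = make_pipeline(StandardScaler(), LinearRegression())
pipeline.fit(X, y)
predictions = pipeline.predict(X)
```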
If a custom pipeline is sought:
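One way to sketch this (all names here, `BaseCoreg`, `VerticalShift` and the local `make_pipeline()`, are hypothetical illustrations rather than existing code in `coreg.py`, and the vertical-shift correction is a deliberately trivial stand-in for a real coregistration step):

```python
import numpy as np

class BaseCoreg:
    """Hypothetical base class: every coreg approach implements fit/transform."""

    def fit(self, reference, dem):
        raise NotImplementedError

    def transform(self, dem):
        raise NotImplementedError

class VerticalShift(BaseCoreg):
    """Toy coreg step: remove the median elevation difference."""

    def fit(self, reference, dem):
        self.shift_ = np.median(reference - dem)
        return self

    def transform(self, dem):
        return dem + self.shift_

def make_pipeline(*steps):
    """Minimal analogue of sklearn.pipeline.make_pipeline for coreg steps."""

    class CoregPipeline(BaseCoreg):
        def fit(self, reference, dem):
            # Fit each step on the progressively corrected DEM.
            for step in steps:
                dem = step.fit(reference, dem).transform(dem)
            return self

        def transform(self, dem):
            for step in steps:
                dem = step.transform(dem)
            return dem

    return CoregPipeline()

# Hypothetical usage with toy 1-D "DEMs":
reference = np.array([10.0, 11.0, 12.0])
dem = np.array([9.0, 10.0, 11.0])
pipeline = make_pipeline(VerticalShift())
pipeline.fit(reference, dem)
aligned = pipeline.transform(dem)
```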
Basically, all coreg approaches should be subclasses of a `BaseCoreg` class or similar, and all should have the methods `.fit()` and `.transform()` (or similar terminology). The `make_pipeline()` could be adapted from the equivalent in scikit-learn.