
Monotonicity of some parameters in distribution #338

Open
thomasfrederikhoeck opened this issue Nov 1, 2023 · 3 comments

Comments

@thomasfrederikhoeck

For some datasets (typically those modeling physical properties) one knows that a monotone constraint can be applied between a feature and the prediction, which can help reduce noise and ensure meaningful relative predictions.

In the point-prediction world this can be done with a model like HistGradientBoostingRegressor using the monotonic_cst setting (see link).
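For concreteness, here is a minimal sketch of that point-prediction approach (the data is made up purely for illustration):

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

# Toy data: the target is truly increasing in feature 0, feature 1 is noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=500)

# monotonic_cst per feature: 1 = increasing, -1 = decreasing, 0 = unconstrained
model = HistGradientBoostingRegressor(monotonic_cst=[1, 0])
model.fit(X, y)
```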

When modeling using a parameterized distribution (like ngboost does), one would probably only want to apply this constraint to some of the parameters in the distribution, e.g. to the loc of a Normal while leaving the scale unconstrained. How would one go about using base learners with different settings for different parameters?
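For context, this is what I believe can be done today, assuming NGBRegressor's Base accepts any scikit-learn regressor: a single base learner configuration is shared by all distribution parameters, so a constraint set there would apply to loc and scale alike.

```python
from ngboost import NGBRegressor
from ngboost.distns import Normal
from sklearn.ensemble import HistGradientBoostingRegressor

# One Base learner configuration is cloned for every distribution parameter,
# so this monotone constraint would also apply to the trees fitted for scale.
base = HistGradientBoostingRegressor(max_depth=3, monotonic_cst=[1, 0])
ngb = NGBRegressor(Dist=Normal, Base=base)
# ngb.fit(X, y)  # X, y as in the sketch above
```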

@alejandroschuler
Collaborator

Oh, that's an interesting idea. Right now I don't think it can be done, but it wouldn't be very hard to modify the code to allow it. Tbh it's something ChatGPT could probably tackle! Feel free to put in a PR.

@thomasfrederikhoeck
Author

@alejandroschuler just for my understanding then: the distribution parameters do not need to share a base learner - they just do right now because there was no use case for them to be different?

@alejandroschuler
Collaborator

Yep!
