
Generalize loss interpolation #71

Open
basnijholt opened this issue Dec 19, 2018 · 3 comments

Comments

@basnijholt
Member

(original issue on GitLab)

opened by Anton Akhmerov (@anton-akhmerov) at 2018-07-02T14:17:22.226Z

With gitlab:#52, we gain better support for user-provided loss functions. However, right now we always split the parent interval's loss proportionally among the child intervals. This prevents the user from guaranteeing that intervals never become shorter than a certain size (e.g. machine precision) merely by redefining the loss.

I am not quite sure how we should address this though.
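To make the problem concrete, here is a minimal sketch of proportional loss splitting; the names are illustrative, not adaptive's actual internals:

```python
# Hypothetical sketch of splitting a parent interval's loss
# proportionally between its children, by width.

def split_loss(parent_loss, parent, children):
    """Split parent_loss among children in proportion to their widths."""
    parent_width = parent[1] - parent[0]
    return [parent_loss * (b - a) / parent_width for (a, b) in children]

# Each child's loss is a fixed fraction of the parent's, so a child of a
# high-loss parent keeps a nonzero loss however narrow it is; redefining
# the loss function alone cannot stop further splits.
losses = split_loss(1.0, (0.0, 1.0), [(0.0, 0.25), (0.25, 1.0)])
```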

@basnijholt
Member Author

originally posted by Jorn Hoofwijk (@Jorn) at 2018-07-13T08:56:20.027Z on GitLab

Maybe add a separate threshold parameter to the learner, by default some small value, indicating the minimal size of a simplex relative to the entire domain. Then, as soon as the volume of a simplex drops below this threshold, we no longer split it, regardless of its loss.

Then some simplices could still become smaller than the threshold, but they won't shrink indefinitely.
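A minimal sketch of this proposal, shown in 1D for simplicity (the parameter name `min_relative_size` is hypothetical):

```python
# Sketch of the proposed threshold: once an interval's width relative to
# the whole domain drops below `min_relative_size`, its loss is forced to
# zero so it is never selected for splitting again.

def effective_loss(loss, interval, domain, min_relative_size=1e-12):
    a, b = interval
    lo, hi = domain
    if (b - a) / (hi - lo) < min_relative_size:
        return 0.0  # too small: never pick this interval for splitting
    return loss

# A child can still end up somewhat below the threshold (its parent was
# above it when split), but intervals cannot shrink indefinitely.
```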

@basnijholt
Member Author

originally posted by Bas Nijholt (@basnijholt) at 2018-12-07T19:56:32.592Z on GitLab

Why can't one just set the loss to 0 for the interval that is "done"?
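For instance, a user-defined loss could return 0 below machine precision; this sketch uses a signature loosely modeled on adaptive's 1D loss functions, so treat the details as assumptions:

```python
import numpy as np

# Sketch of a loss that marks an interval as "done" once its width is at
# machine precision (illustrative; not adaptive's documented API).

def loss_with_cutoff(xs, ys):
    a, b = xs[0], xs[-1]
    if b - a < np.finfo(float).eps * max(abs(a), abs(b), 1.0):
        return 0.0  # interval is "done"; ideally never chosen again
    return b - a  # fall back to a simple width-based loss

# Whether this actually prevents further splits depends on whether the
# learners guarantee that a zero-loss interval is never selected.
```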

@basnijholt
Member Author

originally posted by Anton Akhmerov (@anton-akhmerov) at 2018-12-07T20:17:13.051Z on GitLab

Are the learners guaranteed to ignore loss 0? Do we require or document that the loss is positive anywhere?

Also, higher-order interpolation schemes (e.g. cquad) give loss estimates that vary within the interval, so linear interpolation isn't the correct thing to do there.
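A hypothetical numeric illustration of this point: for a loss that is nonlinear in the interval width, splitting the parent's loss proportionally disagrees with re-evaluating the loss on the children.

```python
# Toy example: suppose the error estimate scales like h**3 (as a stand-in
# for a higher-order scheme), rather than linearly in the width h.

def curvature_loss(a, b):
    return (b - a) ** 3  # hypothetical error estimate, ~ h**3

parent = curvature_loss(0.0, 1.0)            # loss of the full interval
proportional = [0.5 * parent, 0.5 * parent]  # linear (proportional) split
recomputed = [curvature_loss(0.0, 0.5), curvature_loss(0.5, 1.0)]

# Proportional splitting overestimates each child's loss by 4x here, so
# for such losses the children should be re-evaluated, not interpolated.
```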
