Document and test loss function signatures #50

Open
basnijholt opened this issue Dec 19, 2018 · 4 comments

@basnijholt
Member

(original issue on GitLab)

opened by Anton Akhmerov (@anton-akhmerov) at 2018-07-23T19:06:55.212Z

A loss function is a significant part of the interface of each learner. It is the main way for users to customize a learner's behavior, and it offers nearly unlimited flexibility in doing so.

As a consequence, I believe we need to do the following:

  • Each learner that allows a custom loss function must specify the detailed call signature of this function in the docstring.
  • We should test whether a learner provides the correct input to the loss function. For example, if we say that Learner2D passes an interpolation instance to the loss, we should run Learner2D with a loss that verifies that its input is indeed an interpolation instance. We did not realize this before, but the loss is part of the learner's public API.
  • All loss functions that we provide should instead be factory functions that return a loss function whose call signature conforms to the spec. For example, learner2D.resolution_loss(ip, min_distance=0, max_distance=1) does not conform to the spec and is not directly reusable. Instead, this should have been a functools.partial(learner2D.resolution_loss, min_distance=0, max_distance=1) (see the sketch after this list).
  • We should convert all our loss functions that have arbitrary hard-coded parameters into such factory functions, and we should test their conformance to the spec.
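
A minimal sketch of the factory-function pattern proposed in the last two points, assuming the spec is that Learner2D calls its loss with a single interpolator argument, loss(ip); the names and parameter values here are illustrative, not the actual adaptive API:

```python
import functools


def resolution_loss(ip, min_distance=0, max_distance=1):
    """Current style: extra keyword arguments, so the bare function does
    not match the assumed spec loss(ip) and cannot be passed directly."""
    ...  # compute a loss per triangle from the interpolator `ip`


# Option 1: bind the extra parameters with functools.partial, as suggested above.
loss = functools.partial(resolution_loss, min_distance=0.01, max_distance=1)


# Option 2: a factory function that returns a loss whose call signature
# conforms exactly to the spec.
def resolution_loss_function(min_distance=0, max_distance=1):
    def loss(ip):
        return resolution_loss(ip, min_distance=min_distance, max_distance=max_distance)

    return loss


loss = resolution_loss_function(min_distance=0.01)
```

Either form could then be passed as loss_per_triangle when constructing the learner.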
@basnijholt
Member Author

originally posted by Anton Akhmerov (@anton-akhmerov) at 2018-11-21T20:54:13.437Z on GitLab

Also, we probably shouldn't name the factory functions for loss functions get_XXX_loss.

@basnijholt
Member Author

originally posted by Bas Nijholt (@basnijholt) at 2018-12-07T19:21:26.066Z on GitLab

@anton-akhmerov I think we addressed these points (except the second one) recently.

I don't really understand what you mean by

  • We should test whether a learner provides the correct input to the loss function. For example, if we say that Learner2D passes an interpolation instance to the loss, we should run Learner2D with a loss that verifies that its input is indeed an interpolation instance. We did not realize this before, but the loss is part of the learner's public API.

Should we just check the data type? Is that what you mean? If so, why would this be useful?

@basnijholt
Member Author

originally posted by Anton Akhmerov (@anton-akhmerov) at 2018-12-07T20:31:19.691Z on GitLab

  • I think we addressed these points (except the second one) recently.

I cannot confirm that learners clearly document the loss format.

  • Learner1D
  • Learner2D. I may be overly nitpicky here, but the description seems rather vague. Also, I think it should go into the parameters section and not the notes.
  • LearnerND

Did I miss any learner with customizable loss?

@basnijholt
Member Author

originally posted by Anton Akhmerov (@anton-akhmerov) at 2018-12-07T20:32:56.682Z on GitLab

Should we just check the data type? Is that what you mean? If so, why would this be useful?

I think that makes sense for the purpose of API stability.
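
A rough sketch of what such a conformance test could look like, assuming Learner2D is documented to pass a scipy.interpolate.LinearNDInterpolator to the loss (the imports assume adaptive's current module layout; the goal and point count are arbitrary):

```python
import adaptive
import scipy.interpolate
from adaptive.learner.learner2D import default_loss
from adaptive.runner import simple


def test_learner2D_loss_receives_interpolator():
    calls = []

    def checking_loss(ip):
        # Record and check the type that the learner actually passes to the loss.
        calls.append(type(ip))
        assert isinstance(ip, scipy.interpolate.LinearNDInterpolator)
        return default_loss(ip)

    learner = adaptive.Learner2D(
        lambda xy: xy[0] * xy[1],
        bounds=[(-1, 1), (-1, 1)],
        loss_per_triangle=checking_loss,
    )
    simple(learner, goal=lambda l: l.npoints >= 50)
    assert calls  # the loss was called, and always with the documented type
```

If the learner ever stops passing the documented type, the test fails, which is exactly the API-stability guarantee a user-written loss relies on.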
