
Improve default algo / metrics / opener #14

Closed · 3 tasks

samlesu opened this issue Nov 6, 2019 · 4 comments
Comments

@samlesu (Contributor) commented Nov 6, 2019

The current implementations of the algo, the metrics, and the opener return hard-coded values that do not depend on the input values.

This makes it impossible to check that the output models, predictions, and scores are computed as expected.

It would be better if the implementation of these assets made sense from a machine learning point of view.

Tasks:

  • Define the structure to use (it must be simple to implement and simple to use, i.e. expected results should be easy to compute by hand)
  • Implement it
  • Improve the existing tests to ensure the task outputs are correct
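The task list above asks for implementations whose expected results are easy to compute on the side. A minimal sketch of what that could look like is below; the function names (`opener`, `train`, `predict`, `score`) and the mean-label "model" are hypothetical illustrations, not the repo's actual interfaces:

```python
# Hypothetical sketch: an algo/metrics/opener trio whose outputs depend
# on their inputs, so expected results can be verified by hand in tests.

def opener(raw_rows):
    """Opener: split raw rows into features (X) and labels (y)."""
    X = [row[:-1] for row in raw_rows]
    y = [row[-1] for row in raw_rows]
    return X, y

def train(X, y):
    """Algo (train): the 'model' is just the mean label -- trivial to check."""
    return {"mean": sum(y) / len(y)}

def predict(model, X):
    """Algo (predict): one constant prediction per sample, from the mean."""
    return [model["mean"] for _ in X]

def score(y_true, y_pred):
    """Metrics: mean absolute error, easy to recompute on the side."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

if __name__ == "__main__":
    rows = [[1.0, 2.0], [3.0, 4.0]]   # two samples: one feature + one label
    X, y = opener(rows)               # y == [2.0, 4.0]
    model = train(X, y)               # mean label == 3.0
    preds = predict(model, X)         # [3.0, 3.0]
    print(score(y, preds))            # MAE == (|2-3| + |4-3|) / 2 == 1.0
```

With a structure this simple, a test can assert the exact model value, predictions, and score rather than just checking that the computation ran.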
@samlesu samlesu changed the title Improve default algo / metrics Improve default algo / metrics / opener Nov 6, 2019
@jmorel (Contributor) commented Jan 27, 2020

I think that from the test framework, the only output we can see is the potential score, which means that to add checks on the execution we'd have to add testtuples pretty much everywhere. This would make the tests slower to run. Is this still something we want to do?

@samlesu (Contributor, Author) commented Jan 27, 2020

Yes, currently the only output is indeed the score.

In my view, validating the algo in at least one test would be enough: if the model or the data samples do not have a valid format, the train/test tuple will (or at least should) fail to run.

@Kelvin-M (Contributor) commented
Is it still relevant @samlesu @jmorel ?

@samlesu (Contributor, Author) commented Jan 18, 2021

In my opinion we can close this; the ticket is too old :)

AlexandrePicosson pushed a commit that referenced this issue Sep 5, 2022