Improve default algo / metrics / opener #14
The current implementations of the algo, the metrics and the opener return hard-coded values that don't depend on the inputs. This makes it impossible to check that the output models, predictions and score are computed as expected. It would be better if the implementations of these assets made sense from a Machine Learning point of view.
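As an illustration of what input-dependent defaults could look like, here is a minimal sketch using a least-squares linear fit for the algo and mean squared error for the metrics. The class and function names are hypothetical and do not follow the framework's actual asset interfaces:

```python
import numpy as np


class LinearAlgo:
    """Hypothetical default algo: a least-squares linear fit.

    Unlike a hard-coded model, the trained weights depend on the
    training data, so a testtuple can verify the output model.
    """

    def train(self, X, y):
        # Solve w = argmin ||Xw - y||^2 via least squares.
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        return w

    def predict(self, X, model):
        # Predictions depend on both the inputs and the trained model.
        return X @ model


def score(y_true, y_pred):
    """Hypothetical default metrics: mean squared error."""
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))


# A perfect linear relationship should give a score of (almost) zero,
# which a test can assert instead of comparing against a magic number.
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
algo = LinearAlgo()
model = algo.train(X, y)
assert score(y, algo.predict(X, model)) < 1e-9
```

With defaults of this kind, the score reported by the test framework actually reflects the data that went through the pipeline, rather than a constant.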
Tasks:
Comments
I think that from the test framework, the only output we can see is the score. That means that in order to add checks on the execution, we'd have to add testtuples pretty much everywhere, which would make the tests slower to run. Is this still something we want to do?
Yes, indeed, the only output is currently the score. From my point of view, it would be enough if the algo were validated in at least one test.
In my opinion we can close it; this ticket is too old :)