0.2.0 version preparation #49
Conversation
major fix update to 0.1.9.5
- replaced type comparison with isinstance
- more pepification
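The isinstance change above is a standard cleanup; a minimal illustration (not the actual Talos code) of why it matters:

```python
# Hypothetical illustration of replacing type comparison with isinstance;
# the function name is made up for this example.

def is_list_param(value):
    # Old style: `type(value) == list` is False for subclasses of list.
    # New style: isinstance also accepts subclasses, and is the PEP 8 way.
    return isinstance(value, list)

class LoggedList(list):
    """A subclass that an exact type comparison would reject."""
    pass

print(is_list_param([1, 2, 3]))      # True
print(is_list_param(LoggedList()))   # True; type() comparison would say False
```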
- split handling() into ParamGrid() and round_params()
- round_params() uses the ParamGrid() object
- deleted handling()

NOTE: there are some things in Scan() that can be done more cleanly in terms of avoiding redundancy, but I'm leaving that for a full refactor of Scan().
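The ParamGrid()/round_params() split might look roughly like this sketch; the class layout and signatures here are assumptions for illustration, not the actual Talos implementation:

```python
import itertools
import random

class ParamGrid:
    """Expands a dict of parameter lists into the full grid of combinations.

    Hypothetical sketch: the real object likely carries more state.
    """

    def __init__(self, params):
        keys = sorted(params)
        self.param_grid = [dict(zip(keys, combo))
                           for combo in itertools.product(*(params[k] for k in keys))]

def round_params(param_grid_object):
    """Draws one permutation from the grid for the next round."""
    return random.choice(param_grid_object.param_grid)

p = {'lr': [0.1, 0.01], 'batch_size': [16, 32]}
grid = ParamGrid(p)
print(len(grid.param_grid))  # 4 combinations
```

Separating grid construction from per-round drawing keeps Scan() from having to know how the permutations were built.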
- added a function that returns all possible standard metric names (and round_epochs)
- fixed the spear reducer: instead of including just the columns after the 7th, it now includes all columns except those in metric_names
- fixed 'loss' to 'losses' in p3 in test_scan()
- made clearing tf session every round a parameter (False by default)
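The per-round session clearing could be wired up like this minimal sketch; the clear_tf_session parameter name and the run_round helper are assumptions, not necessarily what Scan() uses:

```python
def run_round(build_and_train, clear_tf_session=False):
    """Runs one training round, optionally clearing Keras backend state after.

    Sketch only: build_and_train stands in for the real model-building step.
    """
    history = build_and_train()
    if clear_tf_session:
        # Imported lazily so the flag costs nothing when left off (the default).
        # Clearing frees graph/session state between rounds, which avoids
        # memory growth over long scans at the cost of slower rounds.
        from tensorflow.keras import backend as K
        K.clear_session()
    return history

print(run_round(lambda: {'loss': [0.5]}))
```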
- refactored Scan() and the way parameters and reducers are handled
- added an example iris model to examples/
- renamed notebooks to examples
Works in cases where a reduction is used, and where it's not.
- scan is now in its own submodule/folder
- reduction files now follow the new convention
- major refactor of reduction procedures
- version marking now just in main __init__.py and setup.py
- fixed correlation reduction algo to spearman (it seems rank order makes most sense here)
- deleted redundant files
- added pycharm specific things to .gitignore
- changed version to 0.2.0
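The switch to Spearman fits the use case: hyperparameter-to-metric relationships are often monotonic but rarely linear, and rank-order correlation captures that. A sketch of the idea, assuming a pandas log of round results (the column names here are illustrative, not the exact Talos schema):

```python
import pandas as pd

def spear_correlations(log, metric_names=('round_epochs', 'loss', 'acc',
                                          'val_loss', 'val_acc')):
    """Rank-order correlation of each hyperparameter column against val_acc.

    Parameter columns are everything NOT in metric_names, matching the
    spear reducer fix (rather than 'all columns after the 7th').
    """
    param_cols = [c for c in log.columns if c not in metric_names]
    corr = log[param_cols + ['val_acc']].corr(method='spearman')
    return corr['val_acc'].drop('val_acc')

log = pd.DataFrame({'lr': [0.1, 0.01, 0.001, 0.0001],
                    'batch_size': [16, 32, 16, 32],
                    'val_acc': [0.60, 0.75, 0.85, 0.90],
                    'loss': [0.9, 0.6, 0.4, 0.3]})
print(spear_correlations(log))  # lr correlates -1.0 with val_acc here
```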
Hello @mikkokotila! Thanks for submitting the PR.
Pull Request Test Coverage Report for Build 147
💛 - Coveralls
You really went all out on this! I didn't explicitly check everything in depth, but it looks great. If it passes the unit checks, we know that everything is linked up properly at least!
I would only comment that maybe for the near future we should add a unit check that evaluates a particular instance of training data and yields a predictable result. In other words, let's not just make sure Scan() is working, let's make sure it works properly on a dummy model.
What do you think? Shall I download this version to see, or did you test it on a simple example to make sure the results make sense? If so, I think it's probably fine.
Once you merge this into dev I'll pull it and play around with it for a bit. I might implement some of the suggestions I mentioned.
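The deterministic check being proposed could look something like this sketch, using a trivial stand-in model rather than Scan() itself; dummy_scan is a hypothetical helper, not part of Talos:

```python
def dummy_scan(x, y):
    """Pretends to be one scan round: fits y = w * x by least squares.

    Deterministic by construction, so the test can assert exact results
    instead of only checking that the call completes.
    """
    w = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    predictions = [w * xi for xi in x]
    mse = sum((p - yi) ** 2 for p, yi in zip(predictions, y)) / len(y)
    return {'weight': w, 'mse': mse}

def test_dummy_scan_is_predictable():
    # y is exactly 2 * x, so the fitted weight must be 2 and the error 0.
    result = dummy_scan([1, 2, 3], [2, 4, 6])
    assert abs(result['weight'] - 2.0) < 1e-9
    assert result['mse'] < 1e-9
```

The same pattern would carry over to a real Keras model with a fixed seed: fixed data in, asserted metrics out.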
Thanks! I think it's a really great idea to add some actual validation tests. I've been testing with the Iris dataset, but will test with two other datasets tomorrow. We could then use those as the first validation tests and incorporate them before merging with master.
The big thing here is the refactoring of Scan() and related procedures. Params refactor is not included here yet.
Feature-wise, the biggest change here has to do with reduction. I'm going to keep testing this over the next few days, but basically I've rebuilt the "spear" reducer, which is an attempt to automate the process I use iteratively to optimize the optimization, i.e. a solver for the hyperparameter solver. Much testing is needed still (will work on that this week). Hopefully by the end of this week we could update master, test for a week or two, and push to production.
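One simple version of that iterative "solver for the solver" loop is shrinking the parameter space between rounds by dropping the worst-performing value; this is a hedged sketch of the idea using mean performance per value, not necessarily the spear algorithm itself, and the function name is an assumption:

```python
import pandas as pd

def reduce_param_space(log, param, metric='val_acc'):
    """Drops the worst-performing value of one hyperparameter.

    Illustrative only: groups past rounds by the parameter's value and
    removes the value with the lowest mean metric, so later rounds
    sample from a smaller, better region of the space.
    """
    worst = log.groupby(param)[metric].mean().idxmin()
    return [v for v in log[param].unique() if v != worst]

log = pd.DataFrame({'lr': [0.1, 0.1, 0.01, 0.01],
                    'val_acc': [0.50, 0.60, 0.80, 0.90]})
print(reduce_param_space(log, 'lr'))  # 0.1 performed worst, so only 0.01 remains
```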
More details are in the commit list below.