v.0.5 to Production #360
1) Experimental AutoML capabilities:
- added Params() for generating parameter dictionaries
- added KerasModel() for generating network architectures

2) Namespace cleaning: the namespace now consists only of actionable/useful items. The top level consists of commands, all of which are classes and can therefore be identified by their CamelCase names; utils and templates lead to second-level items. Check it out for yourself to learn more. Further, examples has a third level: datasets, models, and pipelines.

3) Fixes and cleanups:
- cleaned up the naming in ParamGrid
- changed all the numpy imports to come from np
- improved the scan_object.data to-numeric conversion

4) Added new tests.
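As a rough illustration of what a parameter dictionary (the kind Params() generates) looks like and how an experiment enumerates it, here is a pure-Python sketch. The keys and values are made up for the example and are not Talos's actual output:

```python
import itertools

# Hypothetical parameter dictionary: each hyperparameter name maps to the
# list of values to try (ta.Params() generates such dictionaries
# automatically; these keys are invented for illustration).
params = {
    'lr': [0.1, 0.01],
    'hidden_layers': [0, 1, 2],
    'activation': ['relu', 'elu'],
}

# An experiment walks the cartesian product of all the value lists.
keys = sorted(params)
permutations = [dict(zip(keys, combo))
                for combo in itertools.product(*(params[k] for k in keys))]

print(len(permutations))  # 2 * 3 * 2 = 12
```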
- added models and params under ta.templates; cervical cancer, breast cancer, iris, and titanic now each have a dataset, params, and input model
- network_shape() now returns [0] when params['hidden_layers'] is 0, which permits experiments where 0 is one of the options
- added comprehensive functionality tests for templates
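The [0] behavior can be sketched with a toy stand-in for the helper (this is not the real network_shape(), which also supports shaped layer layouts; it is a minimal illustration of the zero-hidden-layers case):

```python
def network_shape(params, last_neuron):
    """Toy stand-in: return the per-layer neuron counts, and [0] when
    'hidden_layers' is 0, so that 0 can be a valid option in an
    experiment instead of a special case that breaks the loop."""
    n = params['hidden_layers']
    if n == 0:
        return [0]
    # naive constant shape: repeat first_neuron for each hidden layer
    return [params['first_neuron']] * n

print(network_shape({'hidden_layers': 0, 'first_neuron': 8}, 1))  # [0]
print(network_shape({'hidden_layers': 3, 'first_neuron': 8}, 1))  # [8, 8, 8]
```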
- added pipelines for breast cancer and cervical cancer
- added tests for all pipeline templates
- fixed an issue with KerasModel() that affected conv1d, lstm, and simplernn
- added Bidirectional LSTM to KerasModel()
- added two new models to /templates
- made the scan_finish() dtype conversion more permissive toward special cases and errors, as such columns should not be forcefully converted in any case (the main thing is that the metrics are always converted)
- fixed ta.Params(), which had 'loss' instead of 'losses' (this created problems later in the results table)
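A permissive conversion along those lines can be sketched with pandas: metric columns are always coerced, while other columns are converted only when every value converts cleanly. This is an illustrative sketch, not the actual scan_finish() code, and the column names are assumptions:

```python
import pandas as pd

def to_numeric_where_possible(df, always_convert=('loss', 'val_loss')):
    """Sketch: always coerce metric columns to numeric; convert other
    columns only if every value converts cleanly, so special-case
    columns are left untouched rather than forcefully converted."""
    out = df.copy()
    for col in out.columns:
        converted = pd.to_numeric(out[col], errors='coerce')
        if col in always_convert or not converted.isna().any():
            out[col] = converted
    return out

df = pd.DataFrame({'loss': ['0.5', '0.4'], 'notes': ['a', '1']})
clean = to_numeric_where_possible(df)
print(clean.dtypes['loss'])   # float64
print(clean.dtypes['notes'])  # object (left as-is)
```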
Added stopping after wall clock time
Allow direct setting of experiment_name
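The wall-clock stop above can be sketched as a simple deadline check between rounds. This is a pure-Python illustration of the idea only; the actual Scan() argument name and its time format are the library's own:

```python
import time

def run_rounds(rounds, time_limit_seconds):
    """Illustrative only: run permutations until either the list is
    exhausted or the wall-clock deadline passes. The deadline is checked
    between rounds, so a running round is never interrupted mid-way."""
    deadline = time.monotonic() + time_limit_seconds
    completed = []
    for round_id in rounds:
        if time.monotonic() >= deadline:
            break
        completed.append(round_id)  # a real round would train a model here
    return completed

print(run_rounds(range(5), time_limit_seconds=60))  # [0, 1, 2, 3, 4]
```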
Removed the use of np.prod to avoid limited-size integers, solving #244
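The underlying issue: np.prod accumulates in a fixed-width integer dtype and silently overflows, while Python ints (e.g. via math.prod) are arbitrary precision. A minimal reproduction, unrelated to the project's actual code:

```python
import math
import numpy as np

values = [2] * 70  # the product is 2**70, larger than a 64-bit integer

# np.prod silently wraps around in its fixed-width integer dtype,
# yielding 0 here on both int64 and int32 platforms.
print(int(np.prod(values)))

# Python integers are arbitrary precision, so the exact product survives.
print(math.prod(values))  # 1180591620717411303424
```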
Add ability to filter out unwanted permutations
Compatibility merge to daily-dev
- moved permutation_filter to its own function. The open question is whether it would be meaningful to support a boolean statement directly, or whether there is a benefit to keeping the lambda.
- renamed premutation >> permutation
- fixed the case where an unrecognized random_method failed silently; it now warns the user and falls back to 'uniform_mersenne'
- added missing docstrings to Scan()
- the tests are now organized in a meaningful way:

"""
The tests below have to serve several purposes:
- test possible input methods to the params dict
- test binary, multi-class, multi-label, and continuous problems
- test all Scan() arguments

Each problem type is presented as a class and contains three experiments using single, list, or range inputs. There is an effort to test as many scenarios as possible here, so be inventive / experiment! If we do well with this part of the testing, there is a healthy base for a more serious approach to ensuring procedural integrity.
"""
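A lambda-style permutation_filter of the kind discussed above can be sketched in pure Python: the predicate sees each candidate parameter combination and returns True to keep it. The parameter names here are invented for the example:

```python
import itertools

params = {'batch_size': [16, 32], 'hidden_layers': [0, 1, 2]}

# Hypothetical filter in the lambda style: drop the combination of
# no hidden layers with the larger batch size, keep everything else.
permutation_filter = lambda p: not (p['hidden_layers'] == 0
                                    and p['batch_size'] == 32)

keys = sorted(params)
all_perms = [dict(zip(keys, c))
             for c in itertools.product(*(params[k] for k in keys))]
kept = [p for p in all_perms if permutation_filter(p)]

print(len(all_perms), len(kept))  # 6 5
```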
Backwards Master to Daily-Dev
- moved permutation_filter to /reducers
- enabled round duration and times (as attributes of the Scan() object)
- went through all the files for PEP 8 issues; only some borderline-long lines remain, otherwise everything is pepified
- changed max_start_... to time_limit for consistency with round_limit
- edited tests to support the changes
- added gpu_utils and generator to ta.utils...
- added tests for generator.py
v.0.5 to Master
v.0.5 to Dev
v.0.5 to Master (ACTUAL)
Hello @mikkokotila! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:
Comment last updated at 2019-08-02 15:33:24 UTC
Pull Request Test Coverage Report for Build 508
💛 - Coveralls
added `allow_pickle=True` to `np.load()`
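Context for that change: NumPy 1.16.3 flipped the default of `np.load()` to `allow_pickle=False`, so loading `.npy` files containing object arrays raises unless the flag is passed explicitly. A minimal reproduction:

```python
import os
import tempfile
import numpy as np

# Object arrays are serialized via pickle when saved.
arr = np.array([{'a': 1}], dtype=object)
path = os.path.join(tempfile.mkdtemp(), 'arr.npy')
np.save(path, arr)

try:
    np.load(path)  # default allow_pickle=False since NumPy 1.16.3
except ValueError as err:
    print('refused:', err)

# Passing allow_pickle=True restores the old behavior.
loaded = np.load(path, allow_pickle=True)
print(loaded[0]['a'])  # 1
```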
See individual commit details for more information.