weekly daily-dev to dev merge #90
Conversation
regular dev to master merge
- added stratified sampling
- added latin hypercube sampling with sudoku constraint
- added latin hypercube
- commented several functions
- moved shuffle as an option in sample_reducer
- added 'random_method' parameter to Scan()
- cleaned up and modified README.md
- added latin hypercube sampling with sudoku constraint submodule
- added the new features to tests
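As an illustration of the stratified sampling idea described above, here is a minimal sketch of drawing an equal number of parameter combinations from each stratum. The function name, the grid, and the `strata_key` callable are all hypothetical and do not reflect the actual `sample_reducer` implementation.

```python
import random

def stratified_sample(population, strata_key, n_per_stratum, seed=None):
    # Group items by stratum, then draw the same number from each group
    # so every stratum is represented in the reduced sample.
    rng = random.Random(seed)
    strata = {}
    for item in population:
        strata.setdefault(strata_key(item), []).append(item)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, min(n_per_stratum, len(members))))
    return sample

# Hypothetical parameter grid: two learning rates x three batch sizes.
grid = [{'lr': lr, 'batch': b} for lr in (0.1, 0.01) for b in (16, 32, 64)]
picked = stratified_sample(grid, lambda p: p['lr'], 2, seed=0)
```

Stratifying on one parameter guarantees the reduced sample still covers every value of that parameter, which plain random reduction does not.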
# the new Randomizer() supports:
- Quantum randomness (vacuum based)
- Ambient sound based randomness
- Sobol sequences
- Halton sequences
- Latin hypercube
- Improved Latin hypercube
- Latin hypercube with a Sudoku-style constraint
- Uniform Mersenne
- Cryptographically sound uniform

Also:
- Reduced testing script rounds to 5 from 10
- FIX: a bug in sample_reduce that prevented stratification from working
- TODO: think about how stratification should work with the various randomizers, and whether it makes sense as a separate option or as part of 'random_method' as it is now
- Updated the README.md
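To illustrate one of the methods listed above, here is a minimal Latin hypercube sampler in plain Python. This is an illustrative sketch of the technique only, not the Randomizer() or `chances` API: each dimension is split into n equal strata, each stratum is sampled exactly once, and stratum order is shuffled independently per dimension.

```python
import random

def latin_hypercube(n, dims, seed=None):
    # For each dimension, assign the n points to the n strata in a
    # shuffled order, then jitter each point uniformly within its stratum.
    rng = random.Random(seed)
    columns = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)
        columns.append([(s + rng.random()) / n for s in strata])
    # Zip per-dimension columns into n points in the unit hypercube.
    return [tuple(col[i] for col in columns) for i in range(n)]

points = latin_hypercube(8, 2, seed=42)
```

The defining property, checked easily, is that projecting the points onto any single axis hits every one of the n strata exactly once.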
Hello @mikkokotila! Thanks for updating the PR. Comment last updated on October 01, 2018 at 21:17 UTC.

Pull Request Test Coverage Report for Build 234 — Coveralls
fix a missing SSL cert problem for the quantum random number API call
regular master >> daily-dev
- improved / added docstrings
- pepified
- added Randomizer as a stand-alone package
- turned "clear_tf_session" True by default in Scan()
- added tf_config function to /utils as a new home for tf resource management functionality
This is now moved into its own package in a separate repo and installs with 'pip install chances'.
TrainingLog adds a callback with a live view of the training process against any metric.
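The callback idea can be sketched without any framework dependency: record the chosen metric at the end of each epoch so it can be plotted live. The class name and interface below are hypothetical stand-ins; TrainingLog's actual implementation may differ.

```python
class MetricLogger:
    """Illustrative metric-recording callback (hypothetical class,
    not the TrainingLog implementation)."""

    def __init__(self, metric='val_acc'):
        self.metric = metric
        self.history = []

    def on_epoch_end(self, epoch, logs):
        # Append (epoch, metric value) so a plot can be refreshed live.
        self.history.append((epoch, logs.get(self.metric)))

# Simulated training loop feeding the callback per-epoch logs.
log = MetricLogger('val_acc')
for epoch, acc in enumerate([0.61, 0.68, 0.74]):
    log.on_epoch_end(epoch, {'val_acc': acc})
```

Because the metric name is a parameter, the same mechanism works "against any metric" that appears in the per-epoch logs.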
- pepification
- moved 'model' to be the first parameter in Scan()
- made 'dataset_name' and 'experiment_no' optional
- 'dataset_name' becomes a timestamp with second accuracy if None
- added 'print_params' option which prints the params every round
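The timestamp fallback described above can be sketched in a few lines. The function name and exact timestamp format are assumptions for illustration; only the behaviour (second-accuracy timestamp when no name is given) comes from the changelog.

```python
from datetime import datetime

def default_dataset_name(dataset_name=None):
    # Fall back to a second-accuracy timestamp when no name is given
    # (format is an assumption; the real code may differ).
    if dataset_name is None:
        return datetime.now().strftime('%Y%m%d%H%M%S')
    return dataset_name
```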
- created custom exceptions / error handling for the most common (and annoying) cases
- added experimental best-model saving
- created exceptions.py in /utils
- fixed an issue in reporting.py where table() required both metric and sort_by to be stated separately unless metric was val_acc
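A custom exception for a common misuse might look like the sketch below. The exception name, the check, and the message are hypothetical; they only illustrate the kind of error handling exceptions.py could provide.

```python
class TalosParamsError(Exception):
    """Hypothetical custom exception for a malformed params argument
    (name assumed; not necessarily what exceptions.py defines)."""

def check_params(params):
    # Fail fast with a specific, readable error instead of a generic
    # TypeError deep inside the scan loop.
    if not isinstance(params, dict) or not params:
        raise TalosParamsError("'params' must be a non-empty dict")
    return params
```

Raising a named exception makes the most common mistakes self-explanatory at the call site.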
- creates two new objects 'saved_weights' and 'saved_models'
- stores the model (as json) and the model weights for each round
- added new class Predict() in /utils/predict.py
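The per-round storage can be sketched with a plain container: one list of architectures serialised as JSON strings, one list of weight sets, appended to in lockstep each round. The class itself is hypothetical; only the attribute names mirror the 'saved_models' / 'saved_weights' objects described above.

```python
import json

class RoundStore:
    """Sketch of per-round model storage (hypothetical class; only the
    two attribute names come from the changelog)."""

    def __init__(self):
        self.saved_models = []   # model architectures as JSON strings
        self.saved_weights = []  # one weight set per round

    def store(self, model_json, weights):
        # Same index in both lists refers to the same round.
        self.saved_models.append(model_json)
        self.saved_weights.append(weights)

store = RoundStore()
store.store(json.dumps({'layers': ['dense', 'dense']}), [[0.1, 0.2], [0.3]])
```

Keeping both artifacts per round is what lets a Predict()-style helper rebuild and reuse any model from the experiment afterwards.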
- added kfold splitting to /utils/validation_split.py
- added f1-score based model validation
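The k-fold splitting mentioned above can be sketched as a minimal index generator: partition n sample indices into k contiguous folds, then yield each fold as the validation set with the rest as training. This is an illustrative sketch, not the actual validation_split.py code.

```python
def kfold_indices(n, k):
    # Spread the remainder so fold sizes differ by at most one.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    folds = []
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    # Each fold takes one turn as the validation set.
    for i, val in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

splits = list(kfold_indices(10, 3))
```

Every sample appears in exactly one validation fold, so a metric such as f1-score averaged over the k folds uses each sample for validation exactly once.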
See individual commits for more.