-
Fixed one-parameter visualization bug and typos in documentation
When optimizing a single parameter, there were issues reimporting the saved files for the visualizations. This was due to the problematic corner case of zero-dimensional or one-element one-dimensional arrays in NumPy, which has now been sanitized. Also fixed some critical typos in the documentation.
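The corner case above can be illustrated with a small sketch. `sanitize_params` is a hypothetical helper, not M-LOOP's actual function; it shows one way to normalize the awkward NumPy shapes that appear when only one parameter is optimized.

```python
import numpy as np

def sanitize_params(params, num_params):
    """Coerce loaded parameter data to shape (num_runs, num_params).

    Hypothetical helper for illustration only. When a single parameter
    is optimized, data reloaded from file can come back as a 0-D scalar
    array or a flat 1-D array; reshape handles both uniformly.
    """
    arr = np.asarray(params, dtype=float)
    return arr.reshape(-1, num_params)

# A single run of a single parameter reloads as a 0-D array:
print(sanitize_params(np.array(3.0), 1).shape)    # (1, 1)
# Several runs of a single parameter reload as a flat 1-D array:
print(sanitize_params([1.0, 2.0, 3.0], 1).shape)  # (3, 1)
```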
-
Fixed a bug where an attribute wasn't present in the learner class, which caused a problem when attempting to plot the visualizations from a file.
michaelhush committed Mar 24, 2017
-
Previous data files can now be imported
Added support for previous data files to be imported into a Gaussian process learner.
-
Use TF gradient when minimizing NN cost function estimate
charmasaur committed Dec 9, 2016 -
Still print predicted_best_cost even when predicted_best_uncertainty isn't set
charmasaur committed Dec 9, 2016 -
Tweak some NNI params to perform better on the test
charmasaur committed Dec 9, 2016
-
Don't do one last training run to predict minima at the end
This was causing an exception to be thrown when trying to get costs from the queue.
-
Add (trivial) scaler back to NNL
charmasaur committed Dec 4, 2016 -
Set new_params_event in MLC after getting the cost
When generation_num=1, if new_params_event is set first, the learner will try to get the cost while the queue is still empty, causing an exception.
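The ordering constraint described above can be sketched with the standard library; `controller_step` and `learner_step` are illustrative names, not M-LOOP's API. The variable `new_params_event` follows the name used in the commit message.

```python
import queue
import threading

costs_in_queue = queue.Queue()
new_params_event = threading.Event()

def controller_step(cost):
    # The fix described above: make the cost available *before* waking
    # the learner. If the event were set first, a learner doing a
    # non-blocking get could find the queue empty and raise queue.Empty.
    costs_in_queue.put(cost)
    new_params_event.set()

def learner_step():
    new_params_event.wait()
    # Non-blocking get: safe only because the cost was queued first.
    return costs_in_queue.get_nowait()

controller_step(0.5)
print(learner_step())  # 0.5
```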
charmasaur committed Dec 4, 2016
-
Dumb implementation of predict_costs array version
charmasaur committed Dec 3, 2016 -
Pull NNI construction into create_neural_net
charmasaur committed Dec 3, 2016
-
Merge branch 'NeuralNetA' of https://github.com/michaelhush/M-LOOP into NeuralNetA
Conflicts: mloop/controllers.py, mloop/learners.py
charmasaur committed Dec 2, 2016 -
Fix importing/creation of NN impl
We need to specify nnlearner as a package. More subtly, because of TF we can only run the NNI in the same process in which it's created, so we need to wait until the learner's run() method is called before constructing the impl.
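The deferred-construction pattern described above might look roughly like this; the class and method names are illustrative, not M-LOOP's actual API, and the TensorFlow model is stood in for by a placeholder object.

```python
import multiprocessing as mp

class NeuralNetLearner(mp.Process):
    """Sketch: defer building the TF-backed implementation until run(),
    so the graph/session is created in the worker process itself.
    Illustrative names only; a TF graph generally cannot be shared
    across a process boundary.
    """
    def __init__(self):
        super().__init__()
        self.impl = None  # deliberately NOT constructed here

    def _create_impl(self):
        # In real code this would construct the TensorFlow model;
        # a placeholder stands in here to keep the sketch runnable.
        return {"model": "constructed in run()"}

    def run(self):
        # First thing run() does in the child process:
        self.impl = self._create_impl()
        # ... training loop would go here ...
```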
charmasaur committed Dec 2, 2016 -
Remove scaler from NNController
charmasaur committed Dec 2, 2016 -
More NNController tidying/tweaking
charmasaur committed Dec 2, 2016 -
Fix number_of_controllers definition
charmasaur committed Dec 2, 2016 -
Basic NN learner implementation
I've pulled the actual network logic out into a new class, to keep the TF stuff separate from everything else and to keep a clear separation between what's modelling the landscape and what's doing prediction.
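The separation described above can be sketched as two classes. All names here are illustrative rather than M-LOOP's actual API, and the network is replaced by a trivial mean model so the sketch stays self-contained.

```python
class NeuralNetImpl:
    """Models the cost landscape; in the real code the TF logic lives here."""
    def fit(self, params, costs):
        # Trivial stand-in for network training: remember the mean cost.
        self._mean = sum(costs) / len(costs)

    def predict(self, params):
        return self._mean

class NeuralNetLearner:
    """Drives prediction and optimization; knows nothing about TF internals."""
    def __init__(self, impl):
        self.impl = impl

    def update(self, params, costs):
        self.impl.fit(params, costs)

    def predict_cost(self, params):
        return self.impl.predict(params)

learner = NeuralNetLearner(NeuralNetImpl())
learner.update([[0.0], [1.0]], [2.0, 4.0])
print(learner.predict_cost([0.5]))  # 3.0
```

The point of the split is that the learner can be tested, swapped, or run without touching the modelling code, and vice versa.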
charmasaur committed Dec 2, 2016
-
charmasaur committed Dec 1, 2016 -
NeuralNet ready for actual net
There appear to be some issues with multiprocessing and the Gaussian process, but only on macOS, and possibly just on my machine. So I've removed all the testing statements I had in the previous commit. The branch should now be ready to integrate a genuine NN.
-
Adding visualizations introduced a bug
Visualizations now work for the NN and GP learners. A mysterious bug has appeared in the GP: scikit-learn stops providing uncertainty predictions after being fit a certain number of times. Committing so I can change branches and investigate.
-
Remove unnecessary uncertainty stuff from NNL
charmasaur committed Nov 25, 2016 -
charmasaur committed Nov 25, 2016 -
Fix some minor controller documentation errors
Git complains to me about them when I touch nearby lines, so I figured it was easier just to fix them.
charmasaur committed Nov 25, 2016
-
Added a shell for the neural net
Added a controller and learner for the neural net, and a new class, MachineLearnerController, which GaussianProcess and NeuralNet both inherit from. I broke the visualizations for GPs in this update, but all the tests work.
-
Updated some docstrings and unit test parameters.
-
Updated the documentation. Candidate for a new version to be released on PyPI.
-
Previously, the training runs had to be completed before M-LOOP would halt. This led to unintuitive behavior when the halting conditions were met early in the optimization process. M-LOOP now halts immediately when any of the halting conditions are met.
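The new halting behavior amounts to checking the conditions inside the loop after every run. This is a minimal sketch with hypothetical function names, not M-LOOP's actual interface:

```python
def optimization_loop(get_next_params, run_experiment, halt_conditions, max_runs):
    """Illustrative loop: check every halting condition after each run
    and stop immediately, rather than waiting for a full set of
    training runs to complete first.
    """
    history = []
    for _ in range(max_runs):
        params = get_next_params()
        cost = run_experiment(params)
        history.append((params, cost))
        if any(cond(history) for cond in halt_conditions):
            break  # halt immediately, even mid-training
    return history

# Example: halt as soon as a target cost is reached.
target_reached = lambda hist: hist[-1][1] < 0.1
runs = optimization_loop(lambda: 0.0, lambda p: 0.05, [target_reached], max_runs=100)
print(len(runs))  # 1
```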
-
Added additional tests for the halting conditions.
Fixed a bug in GP fitting of data containing bad runs.
-
The documentation update is complete. It now describes how to use the shell interface, the differential evolution optimizer, and M-LOOP as an API.
-
The documentation has been updated to explain all the added features and how to use M-LOOP as a Python API. Still needs proofreading.