
Get NN working a bit better on the tests #16

Merged
merged 25 commits into from Dec 9, 2016

Conversation


charmasaur commented Dec 9, 2016

I think the biggest problem was that the scipy minimizer wasn't actually doing any minimizing, apparently because the landscape looked too flat to it (so it was exiting after 0 iterations). The consequence was that the neural network would only ever suggest either random new parameters or the current best-known parameters. Switching to providing the actual gradient via a function seems to have fixed this, and has improved performance a lot -- now the net will suggest better parameters if it thinks it can.
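To illustrate the idea of the fix, here's a minimal sketch using a stand-in quadratic cost. The real code minimizes the NN's predicted cost surface and supplies the gradient computed by TensorFlow; `cost` and `cost_gradient` below are hypothetical placeholders, not M-LOOP functions.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative stand-in for the learner's cost-surface model; the real
# code evaluates the neural net and its TensorFlow gradient instead.
def cost(params):
    return float(np.sum((params - 1.0) ** 2))

def cost_gradient(params):
    return 2.0 * (params - 1.0)

start = np.array([5.0, -3.0])

# jac=... supplies the exact gradient, so the minimizer no longer has
# to estimate it by finite differences on a surface that can look flat.
result = minimize(cost, start, jac=cost_gradient, method="L-BFGS-B")
print(result.x)  # close to [1.0, 1.0]
```

Without `jac`, scipy falls back to finite-difference gradient estimates, which is where the "0 iterations" behaviour can come from on a near-flat surface.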

I also added a bit of regularisation and improved the Adam learning rate. I'm a bit sad that this made a big difference, since I'd hoped we wouldn't need to tune those parameters, but we can investigate that more later.
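As a rough illustration of the regularisation half of that change, here is a minimal sketch of adding an L2 penalty to a training loss. The function name and the penalty strength are hypothetical, not taken from the M-LOOP code.

```python
import numpy as np

# Hypothetical illustration: mean-squared-error data loss plus an L2
# penalty on the network weights. The actual loss and its TF wiring
# live inside the M-LOOP neural-net learner.
def regularised_loss(predictions, targets, weights, reg_strength=1e-3):
    data_loss = np.mean((predictions - targets) ** 2)
    penalty = reg_strength * np.sum(weights ** 2)
    return data_loss + penalty
```

The `reg_strength` knob is exactly the kind of parameter the comment above hopes not to have to tune by hand.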

Finally, I added an extra visualisation that shows the estimated NN cost surface (when there are two parameters).
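A minimal sketch of what such a two-parameter surface plot could look like, using a hand-rolled quadratic as a stand-in for the network's predicted cost; all names and the output filename here are illustrative, not the actual visualisation code.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this illustration
import matplotlib.pyplot as plt

# Hypothetical cost model standing in for the NN's prediction;
# the real visualisation queries the trained network instead.
def predicted_cost(p1, p2):
    return (p1 - 0.3) ** 2 + (p2 + 0.2) ** 2

# Evaluate the estimated surface on a grid over the two parameters.
p1, p2 = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
costs = predicted_cost(p1, p2)

fig, ax = plt.subplots()
contour = ax.contourf(p1, p2, costs, levels=20)
fig.colorbar(contour, ax=ax, label="predicted cost")
ax.set_xlabel("parameter 1")
ax.set_ylabel("parameter 2")
fig.savefig("nn_cost_surface.png")
```

This only works when there are exactly two parameters, which is why the visualisation is gated on that case.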

michaelhush and others added some commits Nov 24, 2016

@michaelhush michaelhush Added a shell for the neural net
Added a controller and learner for the neural net.

Also added a new class MachineLearnerController which GaussianProcess
and NeuralNet both inherit from.

I broke the visualizations for GPs in this update. But all the tests
work.
ecffda8
@charmasaur charmasaur Fix some whitespace errors
Git complains to me about them when I touch nearby lines, so I figured
it was easier just to fix them.
5f48778
@charmasaur charmasaur Fix some minor controller documentation errors 326f98b
@charmasaur charmasaur Tweaks to NN learner shell 635a5f7
@charmasaur charmasaur Remove unnecessary uncertainty stuff from NNL 6a6f663
@michaelhush michaelhush Added visualization, introduced bug
Visualizations now work for NN and GP learners.

A mysterious bug has appeared in GP. scikit-learn stops providing
uncertainty predictions after being fit a certain number of times.

Committing so I can change branch and investigate.
97d5b23
@michaelhush michaelhush NeuralNet ready for actual net
There appear to be some issues with multiprocessing and the Gaussian
process, but only on MacOS, and possibly just on my machine. So I've
removed all the testing statements I had in the previous commit.

Branch should be ready now to integrate in a genuine NN.
e8a8715
@charmasaur charmasaur Fix some NN typos 2efd317
@charmasaur charmasaur Basic NN learner implementation
I've pulled the actual network logic out into a new class, to
keep the TF stuff separate from everything else and to keep a
clear separation between what's modelling the landscape and
what's doing prediction.
d5c5749
@charmasaur charmasaur Fix number_of_controllers definition d7b1fca
@charmasaur charmasaur More NNController tidying/tweaking 2126150
@charmasaur charmasaur Remove scaler from NNController d78a661
@charmasaur charmasaur Tidying/logging for NN impl 34b504b
@charmasaur charmasaur Fix importing/creation of NN impl
We need to specify nnlearner as a package. More subtly, because of TF
we can only run NNI in the same process in which it's created. This
means we need to wait until the run() method of the learner is called
before constructing the impl.
9224be5
@charmasaur charmasaur Merge branch 'NeuralNetA' of https://github.com/michaelhush/M-LOOP into NeuralNetA

Conflicts:
	mloop/controllers.py
	mloop/learners.py
f76c9b2
@charmasaur charmasaur Pull NNI construction into create_neural_net be3c8a5
@charmasaur charmasaur Dumb implementation of predict_costs array version 3a46a17
@charmasaur charmasaur Set new_params_event in MLC after getting the cost
When generation_num=1, if the new_params_event is set first then the
learner will try to get the cost when the queue is empty, causing an
exception.
89f1e1a
@charmasaur charmasaur Add (trivial) scaler back to NNL 3e4b3df
@charmasaur charmasaur Don't do one last train in order to predict minima at the end
This was causing an exception to be thrown when trying to
get costs from the queue.
f22c979
@michaelhush michaelhush Merge pull request #15 from charmasaur/NeuralNetA
Adding NN from charmasaur
82fa70a
@charmasaur charmasaur Tweak some NNI params to perform better on the test e30906a
@charmasaur charmasaur Still print predicted_best_cost even when predicted_best_uncertainty isn't set
e6d371a
@charmasaur charmasaur Use TF gradient when minimizing NN cost function estimate e6e83e8
@charmasaur charmasaur Plot NN surface when there are 2 params 1900587

@charmasaur charmasaur merged commit df56ca1 into michaelhush:master Dec 9, 2016

1 check passed

continuous-integration/travis-ci/pr The Travis CI build passed