Comparing changes
michaelhush/M-LOOP
3 contributors
Commits on Oct 21, 2016
charmasaur - Tweaks to tutorials documentation (ea14b2f)
Commits on Oct 22, 2016
charmasaur - Fix setup syntax error (3c97b8f)
Commits on Nov 02, 2016
michaelhush - Merge pull request #12 from charmasaur/patch-1: Documentation and setup tweaks (dfb5cd3)
Commits on Nov 03, 2016
mhush - Added additional tests for halting conditions. Fixed bug with GP fitting data with bad runs. (1897106)
Commits on Nov 04, 2016
michaelhush - Fixed halting conditions: Previously the training runs had to be completed before M-LOOP would halt. This led to unintuitive behavior when the halting conditions were met early in the optimization process. M-LOOP now halts immediately when any of the halting conditions are met. (cfa5748)
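The behavior change described in this commit can be sketched as a halting check evaluated after every single run, including the training runs. The function below is an illustrative reconstruction, not M-LOOP's actual code; the names and default values mirror the example config options that appear later in this diff.

```python
def should_halt(num_runs, runs_without_better, best_cost,
                max_num_runs=1000,
                max_num_runs_without_better_params=50,
                target_cost=0.01):
    """Return True as soon as any halting condition is met.

    Checked after every run, so a condition met during the initial
    training phase halts the optimization immediately.
    """
    if num_runs >= max_num_runs:
        return True
    if runs_without_better >= max_num_runs_without_better_params:
        return True
    if best_cost is not None and best_cost <= target_cost:
        return True
    return False

# Halts on the third run because the target cost was already reached,
# even though the training runs are not finished.
print(should_halt(num_runs=3, runs_without_better=0, best_cost=0.005))  # True
```

The key point of the fix is that the check runs unconditionally after each cost is received, rather than being gated on the training phase having completed.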
michaelhush - Merge pull request #14 from michaelhush/fixbad: Fixed halting conditions and bad flags (8e7cff7)
michaelhush - v2.1.1 Candidate: Updated the documentation. Candidate for new version to be released on PyPI. (58577fd)
Commits on Nov 24, 2016
michaelhush - Update to test and utilities: Added some updates to docstrings and test unit parameters. (baa5074)
michaelhush - Added a shell for the neural net: Added a controller and learner for the neural net. Also added a new class MachineLearnerController which GaussianProcess and NeuralNet both inherit from. I broke the visualizations for GPs in this update, but all the tests work. (ecffda8)
Commits on Nov 25, 2016
charmasaur - Fix some whitespace errors: Git complains to me about them when I touch nearby lines, so I figured it was easier just to fix them. (5f48778)
charmasaur - Fix some minor controller documentation errors (326f98b)
charmasaur - Tweaks to NN learner shell (635a5f7)
charmasaur - Remove unnecessary uncertainty stuff from NNL (6a6f663)
Commits on Nov 30, 2016
michaelhush - Added visualization, introduced bug: Visualizations now work for NN and GP learners. A mysterious bug has appeared in GP: scikit-learn stops providing uncertainty predictions after being fit a certain number of times. Committing so I can change branch and investigate. (97d5b23)
Commits on Dec 01, 2016
michaelhush - NeuralNet ready for actual net: There appear to be some issues with multiprocessing and the gaussian process, but only on MacOS, and possibly just my machine. So I've removed all the testing statements I had in the previous commit. The branch should now be ready to integrate a genuine NN. (e8a8715)
charmasaur - Fix some NN typos (2efd317)
Commits on Dec 02, 2016
charmasaur - Basic NN learner implementation: I've pulled the actual network logic out into a new class, to keep the TF stuff separate from everything else and to keep a clear separation between what's modelling the landscape and what's doing prediction. (d5c5749)
charmasaur - Fix number_of_controllers definition (d7b1fca)
charmasaur - More NNController tidying/tweaking (2126150)
charmasaur - Remove scaler from NNController (d78a661)
charmasaur - Tidying/logging for NN impl (34b504b)
charmasaur - Fix importing/creation of NN impl: We need to specify nnlearner as a package. More subtly, because of TF we can only run NNI in the same process in which it's created. This means we need to wait until the run() method of the learner is called before constructing the impl. (9224be5)
charmasaur - Merge branch 'NeuralNetA' of https://github.com/michaelhush/M-LOOP into NeuralNetA. Conflicts: mloop/controllers.py, mloop/learners.py (f76c9b2)
Commits on Dec 03, 2016
charmasaur - Pull NNI construction into create_neural_net (be3c8a5)
charmasaur - Dumb implementation of predict_costs array version (3a46a17)
Commits on Dec 04, 2016
charmasaur - Set new_params_event in MLC after getting the cost: When generation_num=1, if the new_params_event is set first then the learner will try to get the cost when the queue is empty, causing an exception. (89f1e1a)
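The race described in this commit is the classic signal-before-publish ordering bug: if the event is set before the cost is enqueued, a consumer woken by the event can find the queue empty. A minimal sketch with stand-in names (only the queue/event semantics are shown, not M-LOOP's actual classes):

```python
import queue
import threading

costs_in_queue = queue.Queue()
new_params_event = threading.Event()

def controller_put_cost(cost):
    # Correct order after the fix: publish the cost first, THEN signal.
    costs_in_queue.put(cost)
    new_params_event.set()

def learner_get_cost():
    new_params_event.wait()
    # Safe: the cost is guaranteed to be in the queue before the event
    # is set, so get_nowait() cannot raise queue.Empty here.
    return costs_in_queue.get_nowait()

controller_put_cost(0.3)
print(learner_get_cost())  # 0.3
```

With the reversed order (set the event, then put the cost) a consumer on another thread could call get_nowait() between the two operations and raise queue.Empty, which matches the exception the commit message describes.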
charmasaur - Add (trivial) scaler back to NNL (3e4b3df)
charmasaur - Don't do one last train in order to predict minima at the end: This was causing an exception to be thrown when trying to get costs from the queue. (f22c979)
Commits on Dec 05, 2016
michaelhush - Merge pull request #15 from charmasaur/NeuralNetA: Adding NN from charmasaur (82fa70a)
Commits on Dec 09, 2016
charmasaur - Tweak some NNI params to perform better on the test (e30906a)
charmasaur - Still print predicted_best_cost even when predicted_best_uncertainty isn't set (e6d371a)
charmasaur - Use TF gradient when minimizing NN cost function estimate (e6e83e8)
charmasaur - Plot NN surface when there are 2 params (1900587)
charmasaur - Merge pull request #16 from charmasaur/NeuralNetA: Get NN working a bit better on the tests (df56ca1)
charmasaur - Revert "Get NN working a bit better on the tests" (9835e3f)
charmasaur - Merge pull request #17 from michaelhush/revert-16-NeuralNetA: Revert "Get NN working a bit better on the tests" (99d5c95)
Commits on Mar 02, 2017
michaelhush - Previous data files can now be imported: Added support for previous data files to be imported into a gaussian process learner. (c2f6519)
Commits on Mar 24, 2017
michaelhush - Updated bug in visualizations: Fixed a bug where an attribute wasn't present in the learner class. This was a problem when attempting to plot the visualizations from a file. (47c16bf)
Commits on Mar 29, 2017
michaelhush - Fixed one-param visualization bug and typos in documentation: When optimizing one parameter, there were some issues reimporting the saved files for the visualizations to work. This was due to the problematic corner case of zero-D or one-D one-element arrays in numpy. This has now been sanitized. Also fixed some critical typos in the documentation. (3bc0374)
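The corner case this commit mentions is a well-known numpy pitfall: a single-parameter result saved and reloaded can come back as a 0-D array, on which indexing fails. A typical sanitization step (illustrative, not M-LOOP's exact code) is np.atleast_1d:

```python
import numpy as np

# A one-parameter value saved and reloaded can come back as a 0-D array:
loaded = np.array(0.7)          # 0-D: loaded[0] would raise IndexError
params = np.atleast_1d(loaded)  # promoted to shape (1,), safely indexable

print(params.shape)  # (1,)
print(params[0])     # 0.7
```

np.atleast_1d is a no-op for arrays that are already 1-D or higher, so the same sanitization path works for the multi-parameter case too.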
Showing 21 changed files with 249 additions and 113 deletions.
- +1 −1 docs/api/controllers.rst
- +1 −0 docs/api/index.rst
- +1 −1 docs/api/interfaces.rst
- +1 −1 docs/api/launchers.rst
- +1 −1 docs/api/learners.rst
- +1 −1 docs/api/mloop.rst
- +1 −1 docs/api/t_esting.rst
- +1 −1 docs/api/utilities.rst
- +1 −1 docs/api/visualizations.rst
- +1 −1 docs/interfaces.rst
- +30 −16 docs/tutorials.rst
- +1 −1 examples/shell_interface_config.txt
- +7 −3 examples/tutorial_config.txt
- +1 −1 mloop/__init__.py
- +38 −36 mloop/controllers.py
- +0 −1 mloop/launchers.py
- +32 −40 mloop/learners.py
- +45 −0 mloop/utilities.py
- +3 −4 mloop/visualizations.py
- +2 −2 setup.py
- +80 −0 tests/test_units.py
docs/api/controllers.rst
@@ -1,7 +1,7 @@
 .. _api-controllers:
 controllers
------------
+===========
 .. automodule:: mloop.controllers
    :members:
docs/api/index.rst
@@ -1,5 +1,6 @@
 .. _sec-api:
+==========
 M-LOOP API
 ==========
docs/api/interfaces.rst
@@ -1,5 +1,5 @@
 interfaces
-----------
+==========
 .. automodule:: mloop.interfaces
    :members:
docs/api/launchers.rst
@@ -1,5 +1,5 @@
 launchers
----------
+=========
 .. automodule:: mloop.launchers
    :members:
docs/api/learners.rst
@@ -1,7 +1,7 @@
 .. _api-learners:
 learners
---------
+========
 .. automodule:: mloop.learners
    :members:
docs/api/mloop.rst
@@ -1,4 +1,4 @@
 mloop
------
+=====
 .. automodule:: mloop
docs/api/t_esting.rst
@@ -1,5 +1,5 @@
 testing
--------
+=======
 .. automodule:: mloop.testing
    :members:
docs/api/utilities.rst
@@ -1,5 +1,5 @@
 utilities
----------
+=========
 .. automodule:: mloop.utilities
    :members:
docs/api/visualizations.rst
@@ -1,5 +1,5 @@
 visualizations
---------------
+==============
 .. automodule:: mloop.visualizations
    :members:
docs/interfaces.rst
@@ -54,7 +54,7 @@ Shell interface
 The shell interface is used when experiments can be run from a command in a shell. M-LOOP will still need to be configured and executed in the same manner described for a file interface as describe in :ref:`tutorial <sec-standard-experiment>`. The only difference is how M-LOOP starts the experiment and reads data. To use this interface you must include the following options::
-   interface='shell'
+   interface_type='shell'
    command='./run_exp'
    params_args_type='direct'
docs/tutorials.rst
@@ -11,7 +11,7 @@ There are two different approaches to using M-LOOP:
 1. You can execute M-LOOP from a command line (or shell) and configure it using a text file.
 2. You can use M-LOOP as a :ref:`python API <sec-api>`.
-If you have a standard experiment, that is operated by LabVIEW, Simulink or some other method, then your should use option 1 and follow the :ref:` first tutorial <sec-standard-experiment>`. If your experiment is operated using python, you should consider using option 2 as it will give you more flexibility and control, in which case, look at the :ref:`second tutorial <sec-python-experiment>`.
+If you have a standard experiment, that is operated by LabVIEW, Simulink or some other method, then you should use option 1 and follow the :ref:`first tutorial <sec-standard-experiment>`. If your experiment is operated using python, you should consider using option 2 as it will give you more flexibility and control, in which case, look at the :ref:`second tutorial <sec-python-experiment>`.
 .. _sec-standard-experiment:
@@ -31,7 +31,7 @@ There are three stages:
 M-LOOP
-   M-LOOP first looks for the configuration file *exp_input.txt*, which contains options like the number of parameters and their limits, in the folder it is executed, then starts the optimization process.
+   M-LOOP first looks for the configuration file *exp_config.txt*, which contains options like the number of parameters and their limits, in the folder it is executed, then starts the optimization process.
 2. M-LOOP controls and optimizes the experiment by exchanging files written to disk. M-LOOP produces a file called *exp_input.txt* which contains a variable params with the next parameters to be run by the experiment. The experiment is expected to run an experiment with these parameters and measure the resultant cost. The experiment should then write the file *exp_output.txt* which contains at least the variable cost which quantifies the performance of that experimental run, and optionally, the variables uncer (for uncertainty) and bad (if the run failed). This process is repeated many times until the halting condition is met.
@@ -68,15 +68,19 @@ You can add comments to your file using #, everything past # will be ignored. Ex
 num_params = 2 #number of parameters
 min_boundary = [-1,-1] #minimum boundary
 max_boundary = [1,1] #maximum boundary
+first_params = [0.5,0.5] #first parameters to try
+trust_region = 0.4 #maximum % move distance from best params
 #Halting conditions
 max_num_runs = 1000 #maximum number of runs
 max_num_runs_without_better_params = 50 #maximum number of runs without finding better parameters
 target_cost = 0.01 #optimization halts when a cost below this target is found
+
+#Learner options
+cost_has_noise = True #whether the cost are corrupted by noise or not
-#Learner specific options
-first_params = [0.5,0.5] #first parameters to try
-trust_region = 0.4 #maximum % move distance from best params
+#Timing options
+no_delay = True #wait for learner to make generate new parameters or use training algorithms
 #File format options
 interface_file_type = 'txt' #file types of *exp_input.mat* and *exp_output.mat*
@@ -86,7 +90,7 @@ You can add comments to your file using #, everything past # will be ignored. Ex
 #Visualizations
 visualizations = True
-We will now explain the options in each of their groups. In almost all cases you will only need to the parameters settings and halting conditions, but we have also describe a few of the most commonly used extra options.
+We will now explain the options in each of their groups. In almost all cases you will only need to the parameters settings and halting conditions, but we have also described a few of the most commonly used extra options.
 Parameter settings
 ~~~~~~~~~~~~~~~~~~
@@ -99,6 +103,10 @@ The number of parameters and their limits is defined with three keywords::
 num_params defines the number of parameters, min_boundary defines the minimum value each of the parameters can take and max_boundary defines the maximum value each parameter can take. Here there are two value which each must be between -1 and 1.
+first_parameters defines the first parameters the learner will try. You only need to set this if you have a safe set of parameters you want the experiment to start with. Just delete this keyword if any set of parameters in the boundaries will work.
+
+trust_region defines the maximum change allowed in the parameters from the best parameters found so far. In the current example the region size is 2 by 2, with a trust region of 40% thus the maximum allowed change for the second run will be [0 +/- 0.8, 0 +/- 0.8]. This is only needed if your experiment produces bad results when the parameters are changes significantly between runs. Simply delete this keyword if your experiment works with any set of parameters within the boundaries.
+
 Halting conditions
 ~~~~~~~~~~~~~~~~~~
@@ -107,6 +115,8 @@ The halting conditions define when the simulation will stop. We present three op
 max_num_runs = 100
 max_num_runs_without_better_params = 10
 target_cost = 0.1
+first_params = [0.5,0.5]
+trust_region = 0.4
 max_num_runs is the maximum number of runs that the optimization algorithm is allowed to run. max_num_runs_without_better_params is the maximum number of runs allowed before a lower cost and better parameters is found. Finally, when target_cost is set, if a run produces a cost that is less than this value the optimization process will stop.
@@ -119,19 +129,23 @@ If you do not want one of the halting conditions, simply delete it from your fil
 max_num_runs_without_better_params = 10
-Learner specific options
-~~~~~~~~~~~~~~~~~~~~~~~~
+Learner Options
+~~~~~~~~~~~~~~~
-There are many learner specific options (and different learner algorithms) described in :ref:`sec-examples`. Here we consider just a couple of the most commonly used ones. M-LOOP has been designed to find an optimum quickly with no custom configuration as long as the experiment is able to provide a cost for every parameter it provides.
+There are many learner specific options (and different learner algorithms) described in :ref:`sec-examples`. Here we just present a common one::
-However if your experiment will fail to work if there are sudden and significant changes to your parameters you may need to set the following options::
+   cost_has_noise = True
+
+If the cost you provide has noise in it, meaning your the cost you calculate would fluctuate if you did multiple experiments with the same parameters, then set this flag to True. If the costs your provide have no noise then set this flag to False. M-LOOP will automatically determine if the costs have noise in them or not, so if you are unsure, just delete this keyword and it will use the default value of True.
-   first_parameters = [0.5,0.5]
-   trust_region = 0.4
+Timing options
+~~~~~~~~~~~~~~
-first_parameters defines the first parameters the learner will try. trust_region defines the maximum change allowed in the parameters from the best parameters found so far. In the current example the region size is 2 by 2, with a trust region of 40% thus the maximum allowed change for the second run will be [0 +/- 0.8, 0 +/- 0.8].
+M-LOOP learns how the experiment works by fitting the parameters and costs using a gaussian process. This learning process can take some time. If M-LOOP is asked for new parameters before it has time to generate a new prediction, it will use the training algorithm to provide a new set of parameters to test. This allows for an experiment to be run while the learner is still thinking. The training algorithm by default is differential evolution, this algorithm is also used to do the first initial set of experiments which are then used to train M-LOOP. If you would prefer M-LOOP waits for the learner to come up with its best prediction before running another experiment you can change this behavior with the option::
-If you experiment reliably produces costs for any parameter set you will not need these settings and you can just delete them.
+   no_delay = True
+
+Set no_delay to true to ensure there is no pauses between experiments and set it to false if you to give M-LOOP to have the time to come up with its most informed choice. Sometimes doing fewer more intelligent experiments will lead to an optimal quicker than many quick unintelligent experiments. You can delete the keyword if you are unsure and it will default to True.
 File format options
 ~~~~~~~~~~~~~~~~~~~
@@ -178,7 +192,7 @@ When writing the file *exp_output.txt* there are three keywords and values you c
 cost refers to the cost calculated from the experimental data. uncer, is optional, and refers to the uncertainty in the cost measurement made. Note, M-LOOP by default assumes there is some noise corrupting costs, which is fitted and compensated for. Hence, if there is some noise in your costs which you are unable to predict from a single measurement, do not worry, you do not have to estimate uncer, you can just leave it out. Lastly bad can be used to indicate an experiment failed and was not able to produce a cost. If the experiment worked set bad = false and if it failed set bad = true.
-Note you do not have to include all of the keywords, you must provide at least a cost or the bad keyword set to false. For example a successful run can simply be::
+Note you do not have to include all of the keywords, you must provide at least a cost or the bad keyword set to true. For example a successful run can simply be::
 cost = 0.3
@@ -219,7 +233,7 @@ M-LOOP, by default, will produce a set of visualizations. These plots show the o
 Python controlled experiment
 ============================
-If you have an experiment that is already under python control you can use M-LOOP as an API. Below we go over the example python script *python_controlled_experiment.py* you should also read over the :ref:` first tutorial <sec-standard-experiment>` to get a general idea of how M-LOOP works.
+If you have an experiment that is already under python control you can use M-LOOP as an API. Below we go over the example python script *python_controlled_experiment.py* you should also read over the :ref:`first tutorial <sec-standard-experiment>` to get a general idea of how M-LOOP works.
 When integrating M-LOOP into your laboratory remember that it will be controlling you experiment, not vice versa. Hence, at the top level of your python script you will execute M-LOOP which will then call on your experiment when needed. Your experiment will not be making calls of M-LOOP.
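The file exchange described in this tutorial diff (M-LOOP writes *exp_input.txt* with a params variable, the experiment replies with *exp_output.txt* containing at least cost or bad) can be sketched from the experiment's side. The parsing below is a simplified stand-in, not M-LOOP's own file handling, and the quadratic cost is a toy experiment:

```python
import ast
import os
import tempfile

def read_exp_input(path):
    """Parse the 'params = [...]' line M-LOOP writes to exp_input.txt."""
    with open(path) as f:
        for line in f:
            key, _, value = line.partition('=')
            if key.strip() == 'params':
                return ast.literal_eval(value.strip())
    raise ValueError('no params found in %s' % path)

def write_exp_output(path, cost, bad=False):
    """Write the exp_output.txt reply: at least cost, or bad = true."""
    with open(path, 'w') as f:
        f.write('cost = %r\n' % cost)
        f.write('bad = %s\n' % ('true' if bad else 'false'))

# One round trip in a temp directory (M-LOOP's side simulated by hand):
d = tempfile.mkdtemp()
inp = os.path.join(d, 'exp_input.txt')
out = os.path.join(d, 'exp_output.txt')
with open(inp, 'w') as f:
    f.write('params = [0.5, 0.5]\n')

params = read_exp_input(inp)
cost = sum(p**2 for p in params)  # toy "experiment"
write_exp_output(out, cost)
print(cost)  # 0.5
```

In a real setup the experiment loops: wait for *exp_input.txt* to appear, run with the given parameters, write *exp_output.txt*, and delete or move the input file before waiting again.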
examples/shell_interface_config.txt
@@ -3,4 +3,4 @@
 interface_type = 'shell' #The type of interface
 command = 'python shell_script.py' #The command for the command line to run the experiment to get a cost from the parameters
-params_args_type = 'direct' #The format of the parameters when providing them on the command line. 'direct' simply appends them, e.g. python CLIscript.py 7 2 1, 'named' names each parameter, e.g. python CLIscript.py --param1 7 --param2 2 --param3 1
+params_args_type = 'direct' #The format of the parameters when providing them on the command line. 'direct' simply appends them, e.g. python shell_script.py 7 2 1, 'named' names each parameter, e.g. python shell_script.py --param1 7 --param2 2 --param3 1
examples/tutorial_config.txt
@@ -8,15 +8,19 @@ interface_type = 'file'
 num_params = 2 #number of parameters
 min_boundary = [-1,-1] #minimum boundary
 max_boundary = [1,1] #maximum boundary
+first_params = [0.5,0.5] #first parameters to try
+trust_region = 0.4 #maximum % move distance from best params
 #Halting conditions
 max_num_runs = 1000 #maximum number of runs
 max_num_runs_without_better_params = 50 #maximum number of runs without finding better parameters
 target_cost = 0.01 #optimization halts when a cost below this target is found
-#Learner specific options
-first_params = [0.5,0.5] #first parameters to try
-trust_region = 0.4 #maximum % move distance from best params
+#Learner options
+cost_has_noise = True #whether the cost are corrupted by noise or not
+
+#Timing options
+no_delay = True #wait for learner to make generate new parameters or use training algorithms
 #File format options
 interface_file_type = 'txt' #file types of *exp_input.mat* and *exp_output.mat*
mloop/__init__.py
@@ -12,5 +12,5 @@
 import os
-__version__= "2.1.0"
+__version__= "2.1.1"
 __all__ = ['controllers','interfaces','launchers','learners','testing','utilities','visualizations','cmd']