
v2.1.1 Candidate

Updated the documentation.
Candidate for new version to be released on PyPI
1 parent 8e7cff7 commit 58577fd25a7185ca26bb1a5d2ba09f20694cfba2 @michaelhush committed Nov 4, 2016
@@ -1,7 +1,7 @@
 .. _api-controllers:
 
 controllers
------------
+===========
 
 .. automodule:: mloop.controllers
    :members:
@@ -1,5 +1,6 @@
 .. _sec-api:
 
+==========
 M-LOOP API
 ==========
@@ -1,5 +1,5 @@
 interfaces
-----------
+==========
 
 .. automodule:: mloop.interfaces
    :members:
@@ -1,5 +1,5 @@
 launchers
---------
+=========
 
 .. automodule:: mloop.launchers
    :members:
@@ -1,7 +1,7 @@
 .. _api-learners:
 
 learners
--------
+========
 
 .. automodule:: mloop.learners
    :members:
@@ -1,4 +1,4 @@
 mloop
-----
+=====
 
 .. automodule:: mloop
@@ -1,5 +1,5 @@
 testing
-------
+=======
 
 .. automodule:: mloop.testing
    :members:
@@ -1,5 +1,5 @@
 utilities
--------
+=========
 
 .. automodule:: mloop.utilities
    :members:
@@ -1,5 +1,5 @@
 visualizations
-------------
+==============
 
 .. automodule:: mloop.visualizations
    :members:
@@ -68,15 +68,19 @@ You can add comments to your file using #, everything past # will be ignored. Ex
     num_params = 2 #number of parameters
     min_boundary = [-1,-1] #minimum boundary
     max_boundary = [1,1] #maximum boundary
+    first_params = [0.5,0.5] #first parameters to try
+    trust_region = 0.4 #maximum % move distance from best params
 
     #Halting conditions
     max_num_runs = 1000 #maximum number of runs
     max_num_runs_without_better_params = 50 #maximum number of runs without finding better parameters
     target_cost = 0.01 #optimization halts when a cost below this target is found
 
-    #Learner specific options
-    first_params = [0.5,0.5] #first parameters to try
-    trust_region = 0.4 #maximum % move distance from best params
+    #Learner options
+    cost_has_noise = True #whether the costs are corrupted by noise or not
+
+    #Timing options
+    no_delay = True #wait for the learner to generate new parameters or use the training algorithm
 
     #File format options
     interface_file_type = 'txt' #file types of *exp_input.mat* and *exp_output.mat*
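As a hypothetical aside, the "#" comment rule described above (everything past # is ignored) can be sketched as follows; `strip_comment` is an illustrative helper, not part of M-LOOP's API, and M-LOOP's real config parser is more involved:

```python
# Illustrative sketch only: everything from '#' to the end of a line is dropped.
def strip_comment(line):
    """Remove the trailing comment and surrounding whitespace from a config line."""
    return line.split('#', 1)[0].strip()

print(strip_comment("max_num_runs = 1000   #maximum number of runs"))
# -> "max_num_runs = 1000"
```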
@@ -86,7 +90,7 @@ You can add comments to your file using #, everything past # will be ignored. Ex
     #Visualizations
     visualizations = True
 
-We will now explain the options in each of their groups. In almost all cases you will only need to set the parameter settings and halting conditions, but we have also describe a few of the most commonly used extra options.
+We will now explain the options in each of their groups. In almost all cases you will only need to set the parameter settings and halting conditions, but we have also described a few of the most commonly used extra options.
 
 Parameter settings
 ~~~~~~~~~~~~~~~~~~
@@ -99,6 +103,10 @@ The number of parameters and their limits is defined with three keywords::
 num_params defines the number of parameters, min_boundary defines the minimum value each of the parameters can take and max_boundary defines the maximum value each parameter can take. Here there are two values, each of which must be between -1 and 1.
 
+first_params defines the first parameters the learner will try. You only need to set this if you have a safe set of parameters you want the experiment to start with. Simply delete this keyword if any set of parameters within the boundaries will work.
+
+trust_region defines the maximum change allowed in the parameters from the best parameters found so far. In the current example the region size is 2 by 2, so with a trust region of 40% the maximum allowed change for the second run will be [0 +/- 0.8, 0 +/- 0.8]. This is only needed if your experiment produces bad results when the parameters are changed significantly between runs. Simply delete this keyword if your experiment works with any set of parameters within the boundaries.
+
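The trust_region arithmetic described above can be sketched numerically; `allowed_interval` is a hypothetical helper used only for illustration, not M-LOOP code:

```python
# Illustrative sketch of the trust_region arithmetic. With boundaries [-1, 1]
# each parameter's region size is 2, so a 40% trust region permits moves of at
# most 0.4 * 2 = 0.8 from the best value found so far.
def allowed_interval(best, minimum, maximum, trust_region):
    """Range a parameter may take on the next run, clipped to the boundaries."""
    radius = trust_region * (maximum - minimum)
    return max(minimum, best - radius), min(maximum, best + radius)

print(allowed_interval(0.0, -1.0, 1.0, 0.4))  # (-0.8, 0.8)
```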
 Halting conditions
 ~~~~~~~~~~~~~~~~~~
@@ -107,6 +115,8 @@ The halting conditions define when the simulation will stop. We present three op
     max_num_runs = 100
     max_num_runs_without_better_params = 10
     target_cost = 0.1
+    first_params = [0.5,0.5]
+    trust_region = 0.4
 
 max_num_runs is the maximum number of runs the optimization algorithm is allowed to perform. max_num_runs_without_better_params is the maximum number of runs allowed without finding a lower cost and better parameters. Finally, when target_cost is set, the optimization process will stop once a run produces a cost below this value.
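The interaction of the three halting conditions can be sketched as follows; `should_halt` is a hypothetical helper for illustration, and M-LOOP's internal logic may differ:

```python
# Illustrative sketch: optimization halts as soon as ANY one condition is met.
def should_halt(num_runs, runs_since_better, best_cost,
                max_num_runs=100,
                max_num_runs_without_better_params=10,
                target_cost=0.1):
    """True once any of the three halting conditions is satisfied."""
    return (num_runs >= max_num_runs
            or runs_since_better >= max_num_runs_without_better_params
            or best_cost < target_cost)

print(should_halt(5, 2, 0.05))  # True: a cost below target_cost was found
print(should_halt(5, 2, 0.50))  # False: no condition met yet
```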
@@ -119,19 +129,23 @@ If you do not want one of the halting conditions, simply delete it from your fil
     max_num_runs_without_better_params = 10
 
-Learner specific options
-~~~~~~~~~~~~~~~~~~~~~~~~
-There are many learner specific options (and different learner algorithms) described in :ref:`sec-examples`. Here we consider just a couple of the most commonly used ones. M-LOOP has been designed to find an optimum quickly with no custom configuration as long as the experiment is able to provide a cost for every set of parameters it is given.
-However, if your experiment will fail to work when there are sudden and significant changes to your parameters, you may need to set the following options::
-
-	first_parameters = [0.5,0.5]
-	trust_region = 0.4
-
-first_parameters defines the first parameters the learner will try. trust_region defines the maximum change allowed in the parameters from the best parameters found so far. In the current example the region size is 2 by 2, so with a trust region of 40% the maximum allowed change for the second run will be [0 +/- 0.8, 0 +/- 0.8].
-
-If your experiment reliably produces costs for any parameter set, you will not need these settings and can just delete them.
+Learner options
+~~~~~~~~~~~~~~~
+There are many learner-specific options (and different learner algorithms) described in :ref:`sec-examples`. Here we just present a common one::
+
+	cost_has_noise = True
+
+If the cost you provide has noise in it, meaning the cost you calculate would fluctuate if you ran the experiment multiple times with the same parameters, set this flag to True. If the costs you provide have no noise, set it to False. M-LOOP will automatically determine whether the costs have noise in them, so if you are unsure, just delete this keyword and it will use the default value of True.
+
+Timing options
+~~~~~~~~~~~~~~
+M-LOOP learns how the experiment works by fitting the parameters and costs with a Gaussian process, and this learning process can take some time. If M-LOOP is asked for new parameters before it has had time to generate a new prediction, it will use the training algorithm to provide a new set of parameters to test. This allows an experiment to be run while the learner is still thinking. The training algorithm is differential evolution by default; this algorithm is also used for the first initial set of experiments, which are then used to train M-LOOP. If you would prefer that M-LOOP wait for the learner to come up with its best prediction before running another experiment, you can change this behavior with the option::
+
+	no_delay = True
+
+Set no_delay to True to ensure there are no pauses between experiments, and set it to False if you want to give M-LOOP time to come up with its most informed choice. Sometimes doing fewer, more intelligent experiments will lead to an optimum quicker than many quick, unintelligent experiments. If you are unsure, you can delete the keyword and it will default to True.
 
 File format options
 ~~~~~~~~~~~~~~~~~~~
@@ -8,15 +8,19 @@ interface_type = 'file'
 num_params = 2 #number of parameters
 min_boundary = [-1,-1] #minimum boundary
 max_boundary = [1,1] #maximum boundary
+first_params = [0.5,0.5] #first parameters to try
+trust_region = 0.4 #maximum % move distance from best params
 
 #Halting conditions
 max_num_runs = 1000 #maximum number of runs
 max_num_runs_without_better_params = 50 #maximum number of runs without finding better parameters
 target_cost = 0.01 #optimization halts when a cost below this target is found
 
-#Learner specific options
-first_params = [0.5,0.5] #first parameters to try
-trust_region = 0.4 #maximum % move distance from best params
+#Learner options
+cost_has_noise = True #whether the costs are corrupted by noise or not
+
+#Timing options
+no_delay = True #wait for the learner to generate new parameters or use the training algorithm
 
 #File format options
 interface_file_type = 'txt' #file types of *exp_input.mat* and *exp_output.mat*
@@ -12,5 +12,5 @@
 import os
 
-__version__= "2.1.0"
+__version__= "2.1.1"
 
 __all__ = ['controllers','interfaces','launchers','learners','testing','utilities','visualizations','cmd']
@@ -39,7 +39,7 @@ def main():
         license = 'MIT',
         keywords = 'automated machine learning optimization optimisation science experiment quantum',
         url = 'https://github.com/michaelhush/M-LOOP/',
-        download_url = 'https://github.com/michaelhush/M-LOOP/tarball/v2.1.0',
+        download_url = 'https://github.com/michaelhush/M-LOOP/tarball/v2.1.1',
         classifiers = ['Development Status :: 2 - Pre-Alpha',
             'Intended Audience :: Science/Research',
