diff --git a/docs/contributing.rst b/docs/contributing.rst index e7f98a2..984e6f0 100644 --- a/docs/contributing.rst +++ b/docs/contributing.rst @@ -5,7 +5,7 @@ Contributing If you use M-LOOP please consider contributing to the project. There are many quick and easy ways to help out. -- If you use M-LOOP be sure to cite paper where it first used: `'Fast machine-learning online optimization of ultra-cold-atom experiments', Sci Rep 6, 25890 (2016) `_. +- If you use M-LOOP be sure to cite the paper where it was first used: `'Fast machine-learning online optimization of ultra-cold-atom experiments', Sci Rep 6, 25890 (2016) `_. - Star and watch the `M-LOOP github `_. - Make a suggestion on what features you would like added, or report an issue, on the `github `_ or by `email `_. - Contribute your own code to the `M-LOOP github `_, this could be the interface you designed, more options or a completely new solver. diff --git a/docs/install.rst b/docs/install.rst index 142a56e..f74647c 100644 --- a/docs/install.rst +++ b/docs/install.rst @@ -2,13 +2,15 @@ Installation ============ -M-LOOP is available on PyPI and can be installed with your favorite package manager. However, we currently recommend you install from the source code to ensure you have the latest improvements and bug fixes. +M-LOOP is available on PyPI and can be installed with your favorite package manager. Simply search for 'M-LOOP' and install. For those new to python, we also provide more comprehensive installation instructions below. The installation process involves three steps. 1. Get a Python distribution with the standard scientific packages. We recommend installing :ref:`sec-anaconda`. -2. Install the development version of :ref:`sec-M-LOOP`. -3. :ref:`Test` your M-LOOP install. +2. Install the latest release of :ref:`sec-M-LOOP`. +3. (Optional) :ref:`Test` your M-LOOP install. 
+ +If you are having any trouble with the installation you may need to check that the :ref:`package dependencies` have been correctly installed. If you are still having trouble, you can report an issue on the `Link github `_. .. _sec-anaconda: @@ -20,13 +22,30 @@ https://www.continuum.io/downloads Follow the installation instructions they provide. -M-LOOP is targeted at python 3.\* but also supports 2.7. Please use python 3.\* if you do not have a reason to use 2.7, see :ref:`sec-py3vpy2` for details. +M-LOOP is targeted at python 3 but also supports 2. Please use python 3 if you do not have a reason to use 2; see :ref:`sec-py3vpy2` for details. .. _sec-m-loop: M-LOOP ------ -M-LOOP can be installed from the source code with three commands:: + +You have two options when installing M-LOOP: you can get the latest stable release using pip, or you can install from source to get the latest features and bug fixes. + +Installing with pip +^^^^^^^^^^^^^^^^^^^ + +M-LOOP can be installed with pip with a single command:: + + pip install M-LOOP + +If you are using Linux or macOS you may need admin privileges to run the command. To update M-LOOP to the latest version use:: + + pip install M-LOOP --upgrade + +Installing from source +^^^^^^^^^^^^^^^^^^^^^^ + +M-LOOP can be installed from the latest source code with three commands:: git clone git://github.com/michaelhush/M-LOOP.git cd ./M-LOOP @@ -42,19 +61,56 @@ in the M-LOOP directory. .. _sec-Testing: -Test Installation ----------------- +Testing +------- -To test your M-LOOP installation use the command:: +If you have installed from source, to test your installation use the command:: python setup.py test In the M-LOOP source code directory. The tests should take around five minutes to complete. If you find a error please consider :ref:`sec-contributing` to the project and report a bug on the `github `_. +If you installed M-LOOP using pip, you will not need to test your installation. + .. 
_sec-dependencies: + +Dependencies +------------ +M-LOOP requires the following packages to run correctly. + +============ ======= +Package Version +============ ======= +docutils >=0.3 +matplotlib >=1.5 +numpy >=1.11 +pip >=7.0 +pytest >=2.9 +setuptools >=26 +scikit-learn >=0.18 +scipy >=0.17 +============ ======= + +These packages should be automatically installed by pip or the script setup.py when you install M-LOOP. + +However, if you are using Anaconda, some packages that are managed by the conda command may not be correctly updated, even if your installation passes all the tests. In this case you will have to update these packages manually. You can check what packages you have installed and their versions with the command:: + + conda list + +To install a package that is missing, say for example pytest, use the command:: + + conda install pytest + +To update a package to the latest version, say for example scikit-learn, use the command:: + + conda update scikit-learn + +Once you have installed and updated all the required packages with conda, M-LOOP should run correctly. + Documentation ------------- -If you would also like a local copy of the documentation enter the docs folder and use the command:: +The latest documentation will always be available here online. If you would also like a local copy of the documentation, enter the docs folder and use the command:: make html @@ -65,6 +121,6 @@ Which will generate the documentation in docs/_build/html. Python 3 vs 2 ------------- -M-LOOP is developed in python 3.\* and it gets the best performance in this environment. This is primarily because other packages that M-LOOP uses, like numpy, run fastest in python 3. The tests typically take about 20% longer to complete in python 2 than 3. +M-LOOP is developed in python 3 and it gets the best performance in this environment. This is primarily because other packages that M-LOOP uses, like numpy, run fastest in python 3. 
The tests typically take about 20% longer to complete in python 2 than 3. -If you have a specific reason to stay in a python 2.7 environment, you may use other packages which are not python 3 compatible, then you can still use M-LOOP without upgrading to 3.\*. However, if you do not have a specific reason to stay with python 2, it is highly recommended you use the latest python 3.\* package. +If you have a specific reason to stay in a python 2 environment (for example, you may use other packages which are not python 3 compatible) then you can still use M-LOOP without upgrading to 3. However, if you do not have a specific reason to stay with python 2, it is highly recommended you use the latest python 3 package. diff --git a/examples/differential_evolution_complete_config.txt b/examples/differential_evolution_complete_config.txt new file mode 100644 index 0000000..88a8547 --- /dev/null +++ b/examples/differential_evolution_complete_config.txt @@ -0,0 +1,19 @@ +#Differential Evolution Complete Options +#--------------------------------------- + +#General options +max_num_runs = 500 #number of planned runs +target_cost = 0.1 #cost to beat + +#Differential evolution controller options +controller_type = 'differential_evolution' +num_params = 2 #number of parameters +min_boundary = [-1.2,-2] #minimum boundary +max_boundary = [10.0,4] #maximum boundary +trust_region = [3.2,3.1] #maximum move distance from best params +first_params = None #first parameters to try; if None a random set of parameters is chosen +evolution_strategy='best2' #evolution strategy can be 'best1', 'best2', 'rand1' and 'rand2'. Best uses the best point, rand uses a random one, the number indicates the number of difference vectors added. +population_size=10 #a multiplier for the population size of a generation +mutation_scale=(0.4, 1.1) #the minimum and maximum value for the mutation scale factor. A scale factor is randomly selected from this interval each generation. Each value must be between 0 and 2. 
+cross_over_probability=0.8 #the probability a parameter will be resampled during a mutation in a new generation +restart_tolerance=0.02 #the search is restarted when the standard deviation of the population costs drops below this fraction of its initial value. \ No newline at end of file diff --git a/examples/differential_evolution_simple_config.txt b/examples/differential_evolution_simple_config.txt new file mode 100644 index 0000000..d4615a0 --- /dev/null +++ b/examples/differential_evolution_simple_config.txt @@ -0,0 +1,15 @@ +#Differential Evolution Basic Options +#------------------------------------ + +#General options +max_num_runs = 500 #number of planned runs +target_cost = 0.1 #cost to beat + +#Differential evolution controller options +controller_type = 'differential_evolution' +num_params = 1 #number of parameters +min_boundary = [-4.8] #minimum boundary +max_boundary = [10.0] #maximum boundary +trust_region = 0.6 #maximum move distance from best params, as a fraction of the boundary range +first_params = [5.3] #first parameters to try + diff --git a/mloop/controllers.py b/mloop/controllers.py index bb1ebae..e4b6964 100644 --- a/mloop/controllers.py +++ b/mloop/controllers.py @@ -11,8 +11,8 @@ import logging import os -controller_dict = {'random':1,'nelder_mead':2,'gaussian_process':3} -number_of_controllers = 3 +controller_dict = {'random':1,'nelder_mead':2,'gaussian_process':3,'differential_evolution':4} +number_of_controllers = 4 default_controller_archive_filename = 'controller_archive' default_controller_archive_file_type = 'txt' @@ -47,6 +47,8 @@ def create_controller(interface, controller_type = str(controller_type) if controller_type=='gaussian_process': controller = GaussianProcessController(interface, **controller_config_dict) + elif controller_type=='differential_evolution': + controller = DifferentialEvolutionController(interface, **controller_config_dict) elif controller_type=='nelder_mead': controller = NelderMeadController(interface, **controller_config_dict) 
elif controller_type=='random': @@ -489,6 +491,37 @@ def _next_params(self): self.learner_costs_queue.put(cost) return self.learner_params_queue.get() +class DifferentialEvolutionController(Controller): + ''' + Controller for the differential evolution learner. + + Args: + interface (interface): The interface to the experiment under optimization. + **kwargs (Optional [dict]): Dictionary of options to be passed to Controller parent class and differential evolution learner. + ''' + def __init__(self, interface, + **kwargs): + super(DifferentialEvolutionController,self).__init__(interface, **kwargs) + + self.learner = mll.DifferentialEvolutionLearner(start_datetime = self.start_datetime, + **self.remaining_kwargs) + + self._update_controller_with_learner_attributes() + self.out_type.append('differential_evolution') + + def _next_params(self): + ''' + Gets next parameters from differential evolution learner. 
+ ''' + if self.curr_bad: + cost = float('inf') + else: + cost = self.curr_cost + self.learner_costs_queue.put(cost) + return self.learner_params_queue.get() + + class GaussianProcessController(Controller): @@ -506,7 +539,7 @@ class GaussianProcessController(Controller): ''' def __init__(self, interface, - training_type='random', + training_type='differential_evolution', num_training_runs=None, no_delay=True, num_params=None, @@ -550,9 +583,21 @@ def __init__(self, interface, num_params=num_params, min_boundary=min_boundary, max_boundary=max_boundary, - learner_archive_filename='training_learner_archive', + learner_archive_filename=None, learner_archive_file_type=learner_archive_file_type, **self.remaining_kwargs) + + elif self.training_type == 'differential_evolution': + self.learner = mll.DifferentialEvolutionLearner(start_datetime=self.start_datetime, + num_params=num_params, + min_boundary=min_boundary, + max_boundary=max_boundary, + trust_region=trust_region, + evolution_strategy='rand2', + learner_archive_filename=None, + learner_archive_file_type=learner_archive_file_type, + **self.remaining_kwargs) + else: self.log.error('Unknown training type provided to Gaussian process controller:' + repr(training_type)) @@ -601,12 +646,12 @@ def _next_params(self): ''' Gets next parameters from training learner. 
''' - if self.training_type == 'nelder_mead': + if self.training_type == 'differential_evolution' or self.training_type == 'nelder_mead': #Copied from NelderMeadController - if self.curr_bad: + if self.last_training_bad: cost = float('inf') else: - cost = self.curr_cost + cost = self.last_training_cost self.learner_costs_queue.put(cost) temp = self.learner_params_queue.get() diff --git a/mloop/learners.py b/mloop/learners.py index 03fdc28..14556f2 100644 --- a/mloop/learners.py +++ b/mloop/learners.py @@ -8,6 +8,7 @@ import threading import numpy as np +import random import numpy.random as nr import scipy.optimize as so import logging @@ -259,6 +260,7 @@ class RandomLearner(Learner, threading.Thread): Keyword Args: min_boundary (Optional [array]): If set to None, overrides default learner values and sets it to a set of value 0. Default None. max_boundary (Optional [array]): If set to None overides default learner values and sets it to an array of value 1. Default None. + first_params (Optional [array]): The first parameters to test. If None will just randomly sample the initial condition. trust_region (Optional [float or array]): The trust region defines the maximum distance the learner will travel from the current best set of parameters. If None, the learner will search everywhere. If a float, this number must be between 0 and 1 and defines maximum distance the learner will venture as a percentage of the boundaries. If it is an array, it must have the same size as the number of parameters and the numbers define the maximum absolute distance that can be moved along each direction. ''' @@ -317,7 +319,6 @@ def run(self): self._shut_down() self.log.debug('Ended Random Learner') - class NelderMeadLearner(Learner, threading.Thread): ''' Nelder-Mead learner. Executes the Nelder-Mead learner algorithm and stores the needed simplex to estimate the next points. 
@@ -548,14 +549,305 @@ def run(self): self._shut_down() self.log.info('Ended Nelder-Mead') -def update_archive(self): + def update_archive(self): ''' Update the archive. ''' - self.archive_dict.update({'archive_type':'nelder_mead_learner', - 'simplex_parameters':self.simplex_params, + self.archive_dict.update({'simplex_parameters':self.simplex_params, 'simplex_costs':self.simplex_costs}) +class DifferentialEvolutionLearner(Learner, threading.Thread): + ''' + Adaptation of the differential evolution algorithm in scipy. + + Args: + params_out_queue (queue): Queue for parameters sent to controller. + costs_in_queue (queue): Queue for costs (and other details) returned by the experiment. + end_event (event): Event to trigger end of learner. + + Keyword Args: + first_params (Optional [array]): The first parameters to test. If None, the initial parameters are randomly sampled. Default None. + trust_region (Optional [float or array]): The trust region defines the maximum distance the learner will travel from the current best set of parameters. If None, the learner will search everywhere. If a float, this number must be between 0 and 1 and defines maximum distance the learner will venture as a percentage of the boundaries. If it is an array, it must have the same size as the number of parameters and the numbers define the maximum absolute distance that can be moved along each direction. + evolution_strategy (Optional [string]): the differential evolution strategy to use, options are 'best1', 'best2', 'rand1' and 'rand2'. Default 'best1'. + population_size (Optional [int]): multiplier for the number of members in a generation. The generation population is set to population_size * num_params. Default 15. + mutation_scale (Optional [tuple]): The mutation scale when picking new points. Otherwise known as differential weight. When provided as a tuple (min,max) a mutation constant is picked randomly in the interval. Default (0.5,1.0). 
+ cross_over_probability (Optional [float]): The recombination constant or crossover probability, the probability each parameter of a mutated candidate will replace the corresponding parameter of the current member. Default 0.7. + restart_tolerance (Optional [float]): when the spread of the current population satisfies stdev(curr_costs) < restart_tolerance * stdev(init_costs), the population has likely converged to a minimum, so the search is restarted. Default 0.01. + + Attributes: + has_trust_region (bool): Whether the learner has a trust region. + num_population_members (int): The number of members in a generation. + params_generations (list): History of the parameters generations. A list of all the parameters in the population, for each generation created. + costs_generations (list): History of the costs generations. A list of all the costs in the population, for each generation created. + init_std (float): The initial standard deviation in costs of the population. Calculated after sampling (or resampling) the initial population. + curr_std (float): The current standard deviation in costs of the population. Calculated after sampling each generation. 
+ ''' + + def __init__(self, + first_params = None, + trust_region = None, + evolution_strategy='best1', + population_size=15, + mutation_scale=(0.5, 1), + cross_over_probability=0.7, + restart_tolerance=0.01, + **kwargs): + + super(DifferentialEvolutionLearner,self).__init__(**kwargs) + + if first_params is None: + self.first_params = float('nan') + else: + self.first_params = np.array(first_params, dtype=float) + if not self.check_num_params(self.first_params): + self.log.error('first_params has the wrong number of parameters:' + repr(self.first_params)) + raise ValueError + if not self.check_in_boundary(self.first_params): + self.log.error('first_params is not in the boundary:' + repr(self.first_params)) + raise ValueError + + self._set_trust_region(trust_region) + + if evolution_strategy == 'best1': + self.mutation_func = self._best1 + elif evolution_strategy == 'best2': + self.mutation_func = self._best2 + elif evolution_strategy == 'rand1': + self.mutation_func = self._rand1 + elif evolution_strategy == 'rand2': + self.mutation_func = self._rand2 + else: + self.log.error('Please select a valid mutation strategy') + raise ValueError + + self.evolution_strategy = evolution_strategy + self.restart_tolerance = restart_tolerance + + if len(mutation_scale) == 2 and np.all(np.array(mutation_scale) <= 2) and np.all(np.array(mutation_scale) > 0): + self.mutation_scale = mutation_scale + else: + self.log.error('Mutation scale must be a tuple with (min,max) between 0 and 2. mutation_scale:' + repr(mutation_scale)) + raise ValueError + + if cross_over_probability <= 1 and cross_over_probability >= 0: + self.cross_over_probability = cross_over_probability + else: + self.log.error('Cross over probability must be between 0 and 1. 
cross_over_probability:' + repr(cross_over_probability)) + raise ValueError + + if population_size >= 5: + self.population_size = population_size + else: + self.log.error('Population size must be greater or equal to 5:' + repr(population_size)) + raise ValueError + + self.num_population_members = self.population_size * self.num_params + + self.first_sample = True + + self.params_generations = [] + self.costs_generations = [] + self.generation_count = 0 + + self.min_index = 0 + self.init_std = 0 + self.curr_std = 0 + + self.archive_dict.update({'archive_type':'differential_evolution', + 'evolution_strategy':self.evolution_strategy, + 'mutation_scale':self.mutation_scale, + 'cross_over_probability':self.cross_over_probability, + 'population_size':self.population_size, + 'num_population_members':self.num_population_members, + 'restart_tolerance':self.restart_tolerance, + 'first_params':self.first_params, + 'has_trust_region':self.has_trust_region, + 'trust_region':self.trust_region}) + + + def run(self): + ''' + Runs the Differential Evolution Learner. + ''' + try: + + self.generate_population() + + while not self.end_event.is_set(): + + self.next_generation() + + if self.curr_std < self.restart_tolerance * self.init_std: + self.generate_population() + + except LearnerInterrupt: + return + + def save_generation(self): + ''' + Save history of generations. 
+ ''' + self.params_generations.append(np.copy(self.population)) + self.costs_generations.append(np.copy(self.population_costs)) + self.generation_count += 1 + + def generate_population(self): + ''' + Sample a new random set of variables + ''' + + self.population = [] + self.population_costs = [] + self.min_index = 0 + + if np.all(np.isfinite(self.first_params)) and self.first_sample: + curr_params = self.first_params + self.first_sample = False + else: + curr_params = self.min_boundary + nr.rand(self.num_params) * self.diff_boundary + + curr_cost = self.put_params_and_get_cost(curr_params) + + self.population.append(curr_params) + self.population_costs.append(curr_cost) + + for index in range(1, self.num_population_members): + + if self.has_trust_region: + temp_min = np.maximum(self.min_boundary,self.population[self.min_index] - self.trust_region) + temp_max = np.minimum(self.max_boundary,self.population[self.min_index] + self.trust_region) + curr_params = temp_min + nr.rand(self.num_params) * (temp_max - temp_min) + else: + curr_params = self.min_boundary + nr.rand(self.num_params) * self.diff_boundary + + curr_cost = self.put_params_and_get_cost(curr_params) + + self.population.append(curr_params) + self.population_costs.append(curr_cost) + + if curr_cost < self.population_costs[self.min_index]: + self.min_index = index + + self.population = np.array(self.population) + self.population_costs = np.array(self.population_costs) + + self.init_std = np.std(self.population_costs) + self.curr_std = self.init_std + + self.save_generation() + + def next_generation(self): + ''' + Evolve the population by a single generation + ''' + + self.curr_scale = nr.uniform(self.mutation_scale[0], self.mutation_scale[1]) + + for index in range(self.num_population_members): + + curr_params = self.mutate(index) + + curr_cost = self.put_params_and_get_cost(curr_params) + + if curr_cost < self.population_costs[index]: + self.population[index] = curr_params + self.population_costs[index] = 
curr_cost + + if curr_cost < self.population_costs[self.min_index]: + self.min_index = index + + self.curr_std = np.std(self.population_costs) + + self.save_generation() + + def mutate(self, index): + ''' + Mutate the parameters at index. + + Args: + index (int): Index of the point to be mutated. + ''' + + fill_point = nr.randint(0, self.num_params) + candidate_params = self.mutation_func(index) + crossovers = nr.rand(self.num_params) < self.cross_over_probability + crossovers[fill_point] = True + mutated_params = np.where(crossovers, candidate_params, self.population[index]) + + if self.has_trust_region: + temp_min = np.maximum(self.min_boundary,self.population[self.min_index] - self.trust_region) + temp_max = np.minimum(self.max_boundary,self.population[self.min_index] + self.trust_region) + rand_params = temp_min + nr.rand(self.num_params) * (temp_max - temp_min) + else: + rand_params = self.min_boundary + nr.rand(self.num_params) * self.diff_boundary + + projected_params = np.where(np.logical_or(mutated_params < self.min_boundary, mutated_params > self.max_boundary), rand_params, mutated_params) + + return projected_params + + def _best1(self, index): + ''' + Use best parameters and two others to generate mutation. + + Args: + index (int): Index of member to mutate. + ''' + r0, r1 = self.random_index_sample(index, 2) + return (self.population[self.min_index] + self.curr_scale *(self.population[r0] - self.population[r1])) + + def _rand1(self, index): + ''' + Use three random parameters to generate mutation. + + Args: + index (int): Index of member to mutate. + ''' + r0, r1, r2 = self.random_index_sample(index, 3) + return (self.population[r0] + self.curr_scale * (self.population[r1] - self.population[r2])) + + def _best2(self, index): + ''' + Use best parameters and four others to generate mutation. + + Args: + index (int): Index of member to mutate. 
+ ''' + r0, r1, r2, r3 = self.random_index_sample(index, 4) + return self.population[self.min_index] + self.curr_scale * (self.population[r0] + self.population[r1] - self.population[r2] - self.population[r3]) + + def _rand2(self, index): + ''' + Use five random parameters to generate mutation. + + Args: + index (int): Index of member to mutate. + ''' + r0, r1, r2, r3, r4 = self.random_index_sample(index, 5) + return self.population[r0] + self.curr_scale * (self.population[r1] + self.population[r2] - self.population[r3] - self.population[r4]) + + def random_index_sample(self, index, num_picks): + ''' + Randomly select num_picks indexes from the population, excluding index. + + Args: + index(int): The index that is not included + num_picks(int): The number of picks. + ''' + rand_indexes = list(range(self.num_population_members)) + rand_indexes.remove(index) + return random.sample(rand_indexes, num_picks) + + def update_archive(self): + ''' + Update the archive. + ''' + self.archive_dict.update({'params_generations':self.params_generations, + 'costs_generations':self.costs_generations, + 'population':self.population, + 'population_costs':self.population_costs, + 'init_std':self.init_std, + 'curr_std':self.curr_std, + 'generation_count':self.generation_count}) + + class GaussianProcessLearner(Learner, mp.Process): ''' @@ -1178,7 +1470,11 @@ def find_local_minima(self): self.has_local_minima = True self.log.info('Search completed') - - + + + + + + diff --git a/mloop/visualizations.py b/mloop/visualizations.py index cc9fe04..9f47743 100644 --- a/mloop/visualizations.py +++ b/mloop/visualizations.py @@ -11,12 +11,12 @@ import logging import matplotlib.pyplot as plt import matplotlib as mpl -from mloop.controllers import GaussianProcessController figure_counter = 0 cmap = plt.get_cmap('hsv') run_label = 'Run number' cost_label = 'Cost' +generation_label = 'Generation number' scale_param_label = 'Min (0) to max (1) parameters' param_label = 'Parameter' log_length_scale_label = 'Log of 
length scale' @@ -38,12 +38,19 @@ def show_all_default_visualizations(controller, show_plots=True): log.debug('Creating controller visualizations.') create_contoller_visualizations(controller.total_archive_filename, file_type=controller.controller_archive_file_type) - if isinstance(controller, GaussianProcessController): + + if isinstance(controller, mlc.DifferentialEvolutionController): + log.debug('Creating differential evolution visualizations.') + create_differential_evolution_learner_visualizations(controller.learner.total_archive_filename, + file_type=controller.learner.learner_archive_file_type) + + if isinstance(controller, mlc.GaussianProcessController): log.debug('Creating gaussian process visualizations.') plot_all_minima_vs_cost_flag = bool(controller.gp_learner.has_local_minima) create_gaussian_process_learner_visualizations(controller.gp_learner.total_archive_filename, file_type=controller.gp_learner.learner_archive_file_type, plot_all_minima_vs_cost=plot_all_minima_vs_cost_flag) + log.info('Showing visualizations, close all to end MLOOP.') if show_plots: plt.show() @@ -225,6 +232,111 @@ def plot_parameters_vs_cost(self): artists.append(plt.Line2D((0,1),(0,0), color=self.param_colors[ind],marker='o',linestyle='')) plt.legend(artists,[str(x) for x in range(1,self.num_params+1)], loc=legend_loc) +def create_differential_evolution_learner_visualizations(filename, + file_type='pkl', + plot_params_vs_generations=True, + plot_costs_vs_generations=True): + ''' + Runs the plots from a differential evolution learner file. + + Args: + filename (Optional [string]): Filename for the differential evolution archive. Must provide datetime or filename. Default None. + + Keyword Args: + file_type (Optional [string]): File type 'pkl' pickle, 'mat' matlab or 'txt' text. + plot_params_vs_generations (Optional [bool]): If True plot parameters vs generations, else do not. Default True. + plot_costs_vs_generations (Optional [bool]): If True plot costs vs generations, else do not. 
Default True. + ''' + visualization = DifferentialEvolutionVisualizer(filename, file_type=file_type) + if plot_params_vs_generations: + visualization.plot_params_vs_generations() + if plot_costs_vs_generations: + visualization.plot_costs_vs_generations() + +class DifferentialEvolutionVisualizer(): + ''' + DifferentialEvolutionVisualizer creates figures from a differential evolution archive. + + Args: + filename (String): Filename of the DifferentialEvolutionVisualizer archive. + + Keyword Args: + file_type (String): Can be 'mat' for matlab, 'pkl' for pickle or 'txt' for text. Default 'pkl'. + + ''' + def __init__(self, filename, + file_type ='pkl', + **kwargs): + + self.log = logging.getLogger(__name__) + + self.filename = str(filename) + self.file_type = str(file_type) + if not mlu.check_file_type_supported(self.file_type): + self.log.error('File type not supported:' + repr(self.file_type)) + learner_dict = mlu.get_dict_from_file(self.filename, self.file_type) + + if 'archive_type' in learner_dict and not (learner_dict['archive_type'] == 'differential_evolution'): + self.log.error('The archive appears to be the wrong type.' 
+ repr(learner_dict['archive_type'])) + raise ValueError + self.archive_type = learner_dict['archive_type'] + + self.num_generations = int(learner_dict['generation_count']) + self.num_population_members = int(learner_dict['num_population_members']) + self.num_params = int(learner_dict['num_params']) + self.min_boundary = np.squeeze(np.array(learner_dict['min_boundary'])) + self.max_boundary = np.squeeze(np.array(learner_dict['max_boundary'])) + self.params_generations = np.array(learner_dict['params_generations']) + self.costs_generations = np.array(learner_dict['costs_generations']) + + self.finite_flag = True + self.param_scaler = lambda p: (p-self.min_boundary)/(self.max_boundary - self.min_boundary) + self.scaled_params_generations = np.array([[self.param_scaler(self.params_generations[inda,indb,:]) for indb in range(self.num_population_members)] for inda in range(self.num_generations)]) + + self.gen_numbers = np.arange(1,self.num_generations+1) + self.param_colors = _color_list_from_num_of_params(self.num_params) + self.gen_plot = np.array([np.full(self.num_population_members, ind, dtype=int) for ind in self.gen_numbers]).flatten() + + def plot_costs_vs_generations(self): + ''' + Create a plot of the costs versus generation number. + ''' + if self.costs_generations.size == 0: + self.log.warning('Unable to plot DE: costs vs generations as the initial generation did not complete.') + return + + global figure_counter, cost_label, generation_label + figure_counter += 1 + plt.figure(figure_counter) + plt.plot(self.gen_plot,self.costs_generations.flatten(),marker='o',linestyle='',color='k') + plt.xlabel(generation_label) + plt.ylabel(cost_label) + plt.title('Differential evolution: Cost vs generation number.') + + def plot_params_vs_generations(self): + ''' + Create a plot of the parameters versus generation number. 
+ ''' + if self.params_generations.size == 0: + self.log.warning('Unable to plot DE: params vs generations as the initial generation did not complete.') + return + + global figure_counter, generation_label, scale_param_label, legend_loc + figure_counter += 1 + plt.figure(figure_counter) + + for ind in range(self.num_params): + plt.plot(self.gen_plot,self.params_generations[:,:,ind].flatten(),marker='o',linestyle='',color=self.param_colors[ind]) + plt.ylim((0,1)) + plt.xlabel(generation_label) + plt.ylabel(scale_param_label) + + plt.title('Differential evolution: Params vs generation number.') + artists=[] + for ind in range(self.num_params): + artists.append(plt.Line2D((0,1),(0,0), color=self.param_colors[ind],marker='o',linestyle='')) + plt.legend(artists,[str(x) for x in range(1,self.num_params+1)],loc=legend_loc) + def create_gaussian_process_learner_visualizations(filename, file_type='pkl', plot_cross_sections=True, @@ -234,7 +346,7 @@ def create_gaussian_process_learner_visualizations(filename, Runs the plots from a gaussian process learner file. Args: - filename (Optional [string]): Filename for the controller archive. Must provide datetime or filename. Default None. + filename (Optional [string]): Filename for the gaussian process archive. Must provide datetime or filename. Default None. Keyword Args: file_type (Optional [string]): File type 'pkl' pickle, 'mat' matlab or 'txt' text. 
diff --git a/tests/test_examples.py b/tests/test_examples.py index e9727e7..d56cbe6 100644 --- a/tests/test_examples.py +++ b/tests/test_examples.py @@ -33,6 +33,8 @@ def test_controller_config(self): def test_extras_config(self): controller = mll.launch_from_file(mlu.mloop_path+'/../examples/extras_config.txt', num_params=1, + min_boundary = [-1.0], + max_boundary = [1.0], target_cost = 0.1, interface_type = 'test', no_delay = False, @@ -42,6 +44,8 @@ def test_extras_config(self): def test_logging_config(self): controller = mll.launch_from_file(mlu.mloop_path+'/../examples/logging_config.txt', num_params=1, + min_boundary = [-1.0], + max_boundary = [1.0], target_cost = 0.1, interface_type = 'test', no_delay = False, @@ -70,6 +74,18 @@ def test_nelder_mead_complete_config(self): **self.override_dict) self.asserts_for_cost_and_params(controller) + def test_differential_evolution_simple_config(self): + controller = mll.launch_from_file(mlu.mloop_path+'/../examples/differential_evolution_simple_config.txt', + interface_type = 'test', + **self.override_dict) + self.asserts_for_cost_and_params(controller) + + def test_differential_evolution_complete_config(self): + controller = mll.launch_from_file(mlu.mloop_path+'/../examples/differential_evolution_complete_config.txt', + interface_type = 'test', + **self.override_dict) + self.asserts_for_cost_and_params(controller) + def test_gaussian_process_simple_config(self): controller = mll.launch_from_file(mlu.mloop_path+'/../examples/gaussian_process_simple_config.txt', interface_type = 'test',
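Reviewer note: for readers unfamiliar with the mutation and crossover steps this patch implements, here is a standalone sketch of the 'best1' strategy with binomial crossover and the out-of-bounds resampling used in `mutate`. This is illustrative code, not part of M-LOOP; the function name `mutate_best1` and its signature are invented for this example, and it follows the same logic as the patch but is simplified (no trust region).

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate_best1(population, costs, index, scale, crossover_prob, min_b, max_b):
    """Sketch of 'best1' mutation with binomial crossover (illustrative only).

    population: (N, D) array of candidate parameter sets.
    costs: (N,) array of costs, one per member.
    """
    num_params = population.shape[1]
    best = population[np.argmin(costs)]
    # 'best1': best member plus one scaled difference of two distinct
    # randomly chosen members, excluding the member being mutated.
    choices = [i for i in range(len(population)) if i != index]
    r0, r1 = rng.choice(choices, size=2, replace=False)
    candidate = best + scale * (population[r0] - population[r1])
    # Binomial crossover: each parameter is taken from the candidate with
    # probability crossover_prob; one randomly chosen parameter always is.
    cross = rng.random(num_params) < crossover_prob
    cross[rng.integers(num_params)] = True
    mutated = np.where(cross, candidate, population[index])
    # Out-of-bounds parameters are resampled uniformly, as in the patch,
    # rather than clipped to the boundary.
    rand = min_b + rng.random(num_params) * (max_b - min_b)
    return np.where((mutated < min_b) | (mutated > max_b), rand, mutated)
```

In the patch, the surviving member is then whichever of `population[index]` and the mutated candidate has the lower cost, so the population cost never increases between generations.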