Lagrant/SmartStart

Smart Starts: Accelerating Convergence through Uncommon Region Exploration

Introduction

Initialization significantly impacts the effectiveness of evolutionary algorithms (EAs) by influencing search trajectories and convergence. Concentrated populations risk premature convergence, while diverse populations enhance search space coverage. This makes effective initialization crucial, especially for complex, multi-modal fitness landscapes.

Traditional uniform random initialization often fails in high-dimensional spaces due to the "curse of dimensionality". While methods like Latin Hypercube Sampling (LHS) and Sobol sequences improve distribution, they still struggle as dimensionality increases. Approaches such as Opposition-Based Learning (OBL) improve diversity but may still miss underexplored subregions.

Example: exploration of a $2D$ space by the Differential Evolution (DE) algorithm while optimizing the sphere function. From the initial population to the final one, several regions remain completely unexplored.

Our Solution: OBLESA (OBL + ESA)

This study introduces OBLESA, a hybrid initialization strategy combining Empty-Space Search Algorithm (ESA) and Opposition-Based Learning (OBL).

This figure depicts the OBLESA algorithm step-by-step in a $2D$ space bounded from $[-5, 5]$. The color of the points represents their fitness value for the sphere problem (blue indicates a low fitness value, while red indicates a high fitness value): a) Pure random population; b) Additional OBL population; c) Additional ESA population; d) Final population, filtered using the best fitness value.

  • OBL for Initial Diversity: OBL generates an initial diverse population by creating "opposite" solutions. This effectively doubles exploration without increasing population size.
  • ESA for Under-explored Region Augmentation: ESA identifies and populates under-explored regions, complementing OBL. This helps to avoid dense clusters and ensures more thorough exploration.
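The OBL step above can be sketched in a few lines. This is a minimal illustration (not the repository's implementation): for a candidate $x$ in a box $[lb, ub]$, the opposite point is $lb + ub - x$ per dimension, so combining the random and opposite populations doubles coverage at no extra sampling cost.

```python
import numpy as np

def obl_population(pop, lb, ub):
    """Opposition-Based Learning: for each candidate x in [lb, ub],
    the opposite point is lb + ub - x (applied per dimension)."""
    return lb + ub - pop

rng = np.random.default_rng(0)
lb, ub = -5.0, 5.0
pop = rng.uniform(lb, ub, size=(100, 2))   # pure random population
opp = obl_population(pop, lb, ub)          # opposite population
combined = np.vstack([pop, opp])           # 200 candidates before fitness filtering
```

In the full OBLESA pipeline this combined set would later be filtered down by fitness, keeping only the best candidates.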

Diagram illustrating the inner workings of the proposed OBLESA initialization method.

ESA for Population Initialization

The Empty-Space Search Algorithm (ESA) identifies sparse, under-explored regions ("empty spaces") in high-dimensional search spaces. While originally used for novel solution discovery, we apply ESA to enhance candidate solution diversity and quality during optimization initialization.

ESA operates by:

  • Randomly placing "agents" in the data space
  • Guiding agents to sparse regions using a Lennard-Jones (L-J) Potential-based approach. The L-J Potential models inter-particle forces, here determining agent movement: $F(r) = 24\frac{1}{\sigma}\left[2\left(\frac{\sigma}{r}\right)^{13} - \left(\frac{\sigma}{r}\right)^7\right]$ where $r$ is the agent-data point distance and $\sigma$ is the particle effect size. A positive force ($r < \sigma$) pushes agents away, while a negative force pulls them back.
  • Calculating agent movement direction from the resultant force of $k$ nearest data points: $\Sigma \vec{F} = \sum_{i}^{k}\vec{u_i}F(r_i) \quad \vec{d} = \frac{\Sigma \vec{F}}{||\Sigma \vec{F}||}$ where $\vec{u_i}$ is the unit vector towards the data point.

Through iterative updates, agents reach equilibrium at the centers of these empty spaces.
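A minimal sketch of one ESA agent update under the equations above. This is an illustrative assumption, not the repository's implementation: it uses a fixed step size, sets $\epsilon = 1$ in the L-J force, and adopts the convention that the unit vectors point from each neighbor to the agent, so the positive force for $r < \sigma$ is repulsive.

```python
import numpy as np

def lj_force(r, sigma=1.0):
    """Lennard-Jones-style force with epsilon = 1: positive (repulsive)
    for r < sigma, negative (attractive) for r > sigma."""
    return 24.0 / sigma * (2.0 * (sigma / r) ** 13 - (sigma / r) ** 7)

def esa_step(agent, data, k=3, sigma=1.0, step=0.1):
    """Move one agent along the normalized resultant force from its
    k nearest data points (one iteration of the ESA update)."""
    diff = agent - data                        # vectors from data points to agent
    dist = np.maximum(np.linalg.norm(diff, axis=1), 1e-12)
    idx = np.argsort(dist)[:k]                 # k nearest neighbors
    u = diff[idx] / dist[idx, None]            # unit vectors, neighbor -> agent
    f = (u * lj_force(dist[idx], sigma)[:, None]).sum(axis=0)
    norm = np.linalg.norm(f)
    if norm == 0.0:
        return agent                           # equilibrium reached
    return agent + step * f / norm             # move along the unit direction
```

Iterating `esa_step` until the resultant force vanishes drives the agent toward the center of the nearest empty region.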

Example of the repulsion vector (green arrow) for a probe (red dot), computed from its three nearest neighbors (yellow dots).

Evaluation & Results

We evaluated OBLESA against random initialization and random + OBL initialization using 24 benchmark functions from the COCO benchmark. The initial population size was set to $100$. We tested OBLESA with two evolutionary algorithms: Differential Evolution (DE) and Enhanced Grey Wolf Optimization (EGWO). Performance was measured by the fraction of benchmark functions that reached the convergence threshold.

  1. OBLESA consistently outperforms the baselines (random and random + OBL) in most cases for both EGWO and DE across various dimensionalities
  2. The advantage of OBLESA becomes more significant in higher-dimensional spaces ($20D$ and $40D$)
  3. EGWO consistently outperforms DE across all dimensions
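The success metric above can be illustrated with a short sketch. The function name and values here are hypothetical, not taken from the paper: for minimization, a benchmark function counts as solved when its best-found value reaches its convergence target.

```python
import numpy as np

def solved_fraction(best_values, targets):
    """Fraction of benchmark functions whose best-found value reached
    its convergence target (minimization assumed)."""
    best = np.asarray(best_values, dtype=float)
    tgt = np.asarray(targets, dtype=float)
    return float(np.mean(best <= tgt))

# hypothetical example: 3 of 4 functions reach a 1e-8 target
frac = solved_fraction([1e-9, 0.5, 1e-9, 1e-10], [1e-8] * 4)  # -> 0.75
```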

Requirements

All the requirements are specified in requirments.txt. Please refer to COCO to install and set up the COCO benchmark.

Run OBLESA

Run the following command in a bash shell:

python coco_benchmarks.py -o egwo -e 500 -s 1

Replace the arguments -o, -e, and -s with your own experiment settings. Explanations of the arguments can be found in coco_benchmarks.py.

Experiment results

Experiment results of the paper and parameters used in the algorithms are shown in results.

Authors

License

This project is licensed under the MIT License - see the LICENSE file for details.

Copyright

Copyright details are provided in the COPYRIGHT file.

