
EvoX Logo

🌟 Evolutionary Generative Optimization: Towards Fully Data-Driven Evolutionary Optimization via Generative Learning 🌟

EvoGO Overview

📚 Table of Contents

  • 🔍 Overview
  • ✨ Key Features
  • JAX Version
  • Torch Version
  • 🤝 Community & Support
  • 📖 Citing EvoGO

🔍 Overview

Evolutionary Generative Optimization (EvoGO) is a fully data-driven framework for black-box optimization. It addresses a core limitation of traditional evolutionary optimization: the reliance on manually designed operators. Instead of heuristic rules, EvoGO learns its search behavior from historical evaluations via generative learning. The framework provides both JAX and PyTorch backends and is designed to be compatible with EvoX.

✨ Key Features

🚫 Fully Data-Driven Optimization

  • Replaces operator-centric evolutionary design with learned generation to drive the search process.
  • Reduces reliance on manual heuristics and extensive parameter tuning.

🎯 Optimization-Goal Alignment

  • Introduces learning objectives explicitly tailored for optimization, improving the consistency between model training and search goals.
  • Helps mitigate common failure modes in learning-based optimization such as premature convergence and misdirected generation.

⚡ Scalable and Parallel-Friendly Search

  • Naturally supports large-scale parallel candidate generation and evaluation.
  • Well-suited for modern high-throughput settings such as GPU-accelerated simulation-based optimization (see the batched-evaluation sketch after this list).

🧪 Strong Performance across Diverse Black-Box Benchmarks

  • Validated on numerical benchmarks, classical control tasks, and high-dimensional robotic control environments.
  • Demonstrates fast convergence and competitive wall-clock efficiency compared with representative baselines.
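
To make the parallel-friendly search point concrete, here is a minimal, framework-agnostic sketch (not EvoGO code) of batched candidate evaluation: a whole population is scored in a single vectorized call with jax.vmap. The Rosenbrock function, batch size, and sampling range are illustrative choices only.

import jax
import jax.numpy as jnp

# Generic illustration of vectorized candidate evaluation (not part of EvoGO).
def rosenbrock(x):
    return jnp.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

# Map the scalar objective over a batch of candidates and JIT-compile the result.
batched_rosenbrock = jax.jit(jax.vmap(rosenbrock))

key = jax.random.PRNGKey(0)
candidates = jax.random.uniform(key, (1024, 5), minval=-2.0, maxval=2.0)
fitness = batched_rosenbrock(candidates)  # shape (1024,), evaluated in one vectorized call
print(fitness.shape, float(fitness.min()))

In the EvoGO examples below, the analogous knobs are num_parallel on the solver and parallels / instances on the problem, which control how many problem instances are run side by side.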

JAX Version

Installation

To set up the JAX environment, simply run the provided setup script:

bash setup_env_jax.sh
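
Once the script finishes, you can sanity-check the JAX installation with a quick generic snippet (not specific to EvoGO):

import jax
print(jax.__version__)  # installed JAX version
print(jax.devices())    # should list GPU/TPU devices if an accelerated backend was installed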

Example

Here is an example of how to use EvoGO with the JAX backend:

import jax
import jax.numpy as jnp

from evogo_jax.evogo import EvoGO
from test_functions.simple_functions_jax import Rosenbrock

if __name__ == "__main__":
    # Configuration
    dim = 5
    num_parallel = 2
    seed = 42
    key = jax.random.PRNGKey(seed)

    # Initialize the problem
    problem = Rosenbrock(dim=dim, key=key, parallels=num_parallel)

    # Initialize the algorithm
    solver = EvoGO(
        max_iter=5,
        batch_size=100,
        gm_batch_size=100,
        num_parallel=num_parallel,
        debug=True,
    )

    # Call the solve method
    print(f"Starting to solve the {dim}-dimensional Rosenbrock function...")
    best_x, best_y = solver.solve(problem, dim=dim, seed=seed)

    print("\n" + "=" * 30)
    print("Optimization Complete!")
    print(f"Best x: {best_x}")
    print(f"Best y: {best_y}")
    print("=" * 30)

Torch Version

Installation

To set up the PyTorch environment, simply run the provided setup script:

bash setup_env_torch.sh
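
As with the JAX setup, a quick generic check (not specific to EvoGO) confirms that PyTorch sees your GPU:

import torch
print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if a CUDA-capable GPU and matching build are present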

Example

Here is an example of how to use EvoGO with the PyTorch backend:

import torch

from evogo_torch.evogo import EvoGO
from test_functions.simple_functions import HarderNumerical

if __name__ == "__main__":
    # Configuration
    dim = 5
    num_parallel = 2
    seed = 42
    device = "cuda:0" if torch.cuda.is_available() else "cpu"
    device_obj = torch.device(device)

    print(f"Running on {device}")

    # Initialize the problem
    problem = HarderNumerical(
        dim=dim,
        device=device_obj,
        eval_fn=HarderNumerical.Rosenbrock,
        instances=num_parallel,
    )

    # Initialize the algorithm
    solver = EvoGO(
        max_iter=5,
        batch_size=100,
        gm_batch_size=100,
        num_parallel=num_parallel,
        use_gp=True,
        debug=True,
        gpu_id=0 if "cuda" in device else -1,  # GPU index when CUDA is available, -1 otherwise
    )

    # Call the solve method
    print(f"Starting to solve the {dim}-dimensional Rosenbrock function (HarderNumerical)...")
    best_x, best_y = solver.solve(problem, dim=dim, seed=seed, device=device)

    print("\n" + "=" * 30)
    print("Optimization Complete!")
    print(f"Best x (normalized input to solver): {best_x}")
    print(f"Best y: {best_y}")
    print("=" * 30)

🤝 Community & Support

We welcome contributions and look forward to your feedback!

  • Engage in discussions and share your experiences on GitHub Issues.
  • Join our QQ group (ID: 297969717).

📖 Citing EvoGO

@article{sun2025evolutionary,
  title     = {Evolutionary Generative Optimization: Towards Fully Data-Driven Evolutionary Optimization via Generative Learning},
  author    = {Sun, Kebin and Jiang, Tao and Cheng, Ran and Jin, Yaochu and Tan, Kay Chen},
  journal   = {arXiv preprint arXiv:2508.00380},
  year      = {2025}
}
