A Linear Partitioning Diversity Metric for Evaluation of Permutation-based Metaheuristic Algorithms
https://link.springer.com/article/10.1007/s12065-025-01035-9
- Majid Shahbazi
- Faramarz Safi-Esfahani
- Sara Shahbazi
- Seyedali Mirjalili
The LPDM (Linear Partitioning Diversity Metric) Framework provides a hybrid experimentation environment designed to evaluate the performance of permutation-based metaheuristic algorithms using a novel diversity metric. This project supports both Python and C# implementations and includes empirical results, benchmark comparisons, and the full methodology used in the related research paper:
📄 "A Linear Partitioning Diversity Metric for Evaluation of Permutation-based Metaheuristic Algorithms" (Shahbazi et al., 2025)
LPDM_Framework/
│
├── Codes-Python/ # Python implementations of LPDM and benchmarks
├── Codes-C#/ # C# implementations for LPDM simulation and experiments
├── Experiment Results/ # Collected metrics and diversity analysis (CSV, XLSX)
├── README.md # Project overview and documentation
├── LICENSE # MIT License
└── CITATION.cff # Citation information for this framework
- Dual-language support: Implementations in Python and C#
- Diversity-aware evaluation: Incorporates the LPDM metric for understanding solution diversity in permutation-based algorithms
- Comprehensive experimental design: Includes 18 experiment configurations across multiple strategies, operators, and benchmark functions
- Benchmark Integration: Ready to integrate with standard metaheuristics (e.g., GA, PSO, SA, TS, ACO)
- Extendable and modular: Easy to add new algorithms or diversity measures
- Statistical Output: CSV/XLSX reports and visualizations for comparative analysis
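Because the framework is described as extendable and modular, a pluggable diversity measure might look like the sketch below. The class names (`DiversityMeasure`, `HammingDiversity`) are hypothetical and chosen for illustration; the repository's actual interfaces may differ.

```python
from abc import ABC, abstractmethod

class DiversityMeasure(ABC):
    """Hypothetical plug-in interface for diversity measures.
    The actual class names in the repository may differ."""

    @abstractmethod
    def score(self, population):
        """Return a diversity score for a list of permutations."""

class HammingDiversity(DiversityMeasure):
    """Example measure: average pairwise Hamming distance,
    normalised to [0, 1] by the permutation length."""

    def score(self, population):
        n = len(population[0])
        pairs = [(a, b) for i, a in enumerate(population)
                 for b in population[i + 1:]]
        if not pairs:
            return 0.0
        return sum(sum(x != y for x, y in zip(a, b)) / n
                   for a, b in pairs) / len(pairs)

# Two reversed 3-element permutations disagree in 2 of 3 positions.
print(HammingDiversity().score([[0, 1, 2], [2, 1, 0]]))
```

A new measure (including LPDM itself) would only need to subclass `DiversityMeasure` and implement `score`.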
The LPDM Framework assesses diversity in metaheuristic search processes using a linear partitioning model over the solution space. The approach is broken into the following phases:
1. Search Space Encoding: Permutation-based solutions are encoded and grouped via partitioned diversity spaces.
2. Diversity Metric Computation: A linear diversity score is computed by mapping each solution to predefined partitions based on linear position indices.
3. Experimentation and Comparison: The framework runs controlled experiments across standard metaheuristics with varied configurations and captures convergence behavior and diversity over time.
4. Evaluation: Performance is analyzed using metrics such as solution quality, LPDM diversity, and statistical spread across runs.
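The phases above can be illustrated with a toy computation. The sketch below is not the exact LPDM formula from the paper; it only shows the general idea of mapping each element of a permutation to one of `k` equal linear partitions and scoring population diversity from the resulting partition signatures. The function names and the pairwise-disagreement scoring are assumptions made for illustration.

```python
import random
from itertools import combinations

def partition_index(value, n, k):
    """Map a value in [0, n) to one of k equal linear partitions."""
    return min(value * k // n, k - 1)

def lpdm_like_diversity(population, k=4):
    """Illustrative stand-in for an LPDM-style score (not the paper's
    exact formula): replace each permutation by its partition signature,
    then average the normalised pairwise disagreement of signatures."""
    n = len(population[0])
    sigs = [tuple(partition_index(v, n, k) for v in perm)
            for perm in population]
    if len(sigs) < 2:
        return 0.0
    total, pairs = 0.0, 0
    for a, b in combinations(sigs, 2):
        total += sum(x != y for x, y in zip(a, b)) / n
        pairs += 1
    return total / pairs

random.seed(0)
pop = [random.sample(range(8), 8) for _ in range(5)]
score = lpdm_like_diversity(pop, k=4)
print(round(score, 3))
```

A fully converged population (all permutations identical) scores 0, while a scattered population approaches 1, which is the qualitative behavior a diversity metric should exhibit.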
For full methodological details, please refer to the accompanying paper.
The Experiment Results/ folder contains the outcomes of 18 structured experiments:
- Variations of selection strategies and neighborhood operators
- Impact of LPDM on convergence dynamics
- Benchmarks: Job Shop Scheduling Problems, Traveling Salesman Problems
- Results are provided in .csv and .xlsx formats with summaries of accuracy, diversity, and runtime.
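The CSV reports can be aggregated with the standard library alone. In this sketch the column name "diversity" is an assumption; match it to the actual headers of the files in Experiment Results/. The demo writes a synthetic file so the snippet is self-contained.

```python
import csv
import os
import tempfile
from statistics import mean

def summarize(path, column="diversity"):
    """Aggregate one metric column from a results CSV.
    The column name 'diversity' is a placeholder -- adjust it to the
    real headers in Experiment Results/."""
    with open(path, newline="") as f:
        vals = [float(row[column]) for row in csv.DictReader(f)]
    return {"runs": len(vals), "mean": mean(vals),
            "min": min(vals), "max": max(vals)}

# Synthetic stand-in for a real results file.
with tempfile.NamedTemporaryFile("w", suffix=".csv",
                                 delete=False, newline="") as f:
    csv.writer(f).writerows([["run", "diversity"],
                             [1, 0.42], [2, 0.38]])
    demo_path = f.name

result = summarize(demo_path)
print(result)
os.remove(demo_path)
```

The same pattern extends to the accuracy and runtime columns, or to pandas if heavier analysis is needed.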
To run the Python-based modules:
pip install -r requirements.txt

This project is licensed under the MIT License.
If you use this framework in your research, please cite the following:
@article{shahbazi2025linear,
author = {Shahbazi, M. and Safi-Esfahani, F. and Shahbazi, S. and Mirjalili, S.},
title = {A linear partitioning diversity metric for evaluation of permutation-based metaheuristic algorithms},
journal = {Evolutionary Intelligence},
volume = {18},
pages = {69},
year = {2025},
doi = {10.1007/s12065-025-01035-9},
url = {https://doi.org/10.1007/s12065-025-01035-9}
}
You can also refer to the citation file: CITATION.cff
Contributions and extensions are welcome!
To report issues or propose enhancements, feel free to open an issue or fork the repository.
🧪 Explore the diversity, benchmark your algorithms, and improve your optimization strategies using LPDM!
