
pfevaluator: A library for evaluating performance metrics of Pareto fronts in multiple/many objective optimization problems



"Knowledge is power, sharing it is the premise of progress in life. It seems like a burden to someone, but it is the only way to achieve immortality." --- Thieu Nguyen


Introduction

Dependencies

  • Python (>= 3.6)
  • NumPy (>= 1.18.1)
  • pygmo (>= 2.13.0)

User installation

Install the current PyPI release:

pip install pfevaluator     

Or install the development version from GitHub:

pip install git+https://github.com/thieu1995/pfevaluator

Pareto Front Performance Metrics

Closeness: Metrics Measuring the Closeness of the Solutions to the True Pareto Front (a minimal GD/IGD sketch follows this list)
  1. GD: Generational Distance
  2. IGD: Inverted Generational Distance
  3. MPFE: Maximum Pareto Front Error
Closeness - Diversity: Metrics Measuring both the Closeness and the Diversity of the Solutions
  1. HV: Hyper Volume (computed via an external library; see the sketch after this list)
  2. HAR: Hyper Area Ratio (computed via an external library)
Distribution: Metrics Focusing on Distribution of the Solutions
  1. UD: Uniform Distribution
  2. S: Spacing
  3. STE: Spacing To Extend
  4. NDC: Number of Distinct Choices (Not Implemented Yet)
Ratio: Metrics Assessing the Number of Pareto Optimal Solutions in the Set
  1. RNI: Ratio of Non-dominated Individuals
  2. ER: Error Ratio
  3. ONVG: Overall Non-dominated Vector Generation
  4. PDI: Pareto Dominance Indicator (Not Implemented Yet)
Spread: Metrics Concerning Spread of the Solutions
  1. MS: Maximum Spread
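
As a concrete sketch of what the closeness metrics measure: below is a minimal NumPy implementation of GD and IGD in their common Euclidean (mean-distance) forms, plus hypervolume computed through pygmo (the dependency listed above). The helper names and sample data are illustrative, not part of the pfevaluator API.

import numpy as np
import pygmo as pg

def gd(front, reference):
    ## Generational Distance (p = 1 form): mean Euclidean distance from
    ## each obtained point to its nearest point on the reference front.
    dists = np.linalg.norm(front[:, None, :] - reference[None, :, :], axis=2)
    return dists.min(axis=1).mean()

def igd(front, reference):
    ## Inverted Generational Distance: GD with the two fronts swapped.
    return gd(reference, front)

obtained = np.array([[0.2, 0.9], [0.5, 0.5], [0.9, 0.2]])   ## toy obtained front
true_pf = np.array([[0.1, 0.8], [0.4, 0.4], [0.8, 0.1]])    ## toy reference front

print(gd(obtained, true_pf), igd(obtained, true_pf))

## Hypervolume via pygmo (minimization; the reference point must be
## dominated by every point of the front).
print(pg.hypervolume(obtained).compute([1.0, 1.0]))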

Examples


+ front: the file contains the class Metric for evaluating all possible solutions (the population of obtained fronts).
+ pfront (Pareto front): the file contains the class Metric for evaluating the obtained front from each test case.
+ tpfront (True Pareto front): the file contains the class Metric for evaluating the obtained front against the True Pareto front
 (reference front). That is, you need to pass the reference front to this class.

+ The True Pareto front (reference front) can be obtained in two ways:
    1) You provide it (if you know the True Pareto front for your problem).
    2) You calculate it from all fronts obtained across all test cases (see the sketch after this list):
        + Suppose you have N1 algorithms to test.
        + Each algorithm gives you an obtained front.
        + You run each algorithm for N2 independent trials --> number of obtained fronts: N1 * N2.
        + Pass all N1 * N2 fronts to our function to compute the non-dominated solutions (the reference front,
 also called the approximate Pareto front or True Pareto front).

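A minimal sketch of option 2, assuming each trial returns its front as a NumPy array of objective values (the loop bounds and the random placeholder fronts are illustrative); find_reference_front is the helper documented in the example below.

import numpy as np
import pfevaluator

N1, N2 = 3, 5                               ## 3 algorithms, 5 trials each (illustrative)
all_fronts = []
for algo in range(N1):
    for trial in range(N2):
        ## Replace this random placeholder with the front your optimizer returns.
        obtained_front = np.random.rand(20, 2)
        all_fronts.append(obtained_front)

## Stack all N1 * N2 fronts into one matrix of candidate solutions ...
matrix_fitness = np.concatenate(all_fronts, axis=0)

## ... and keep only its non-dominated solutions as the reference front.
reference_front = pfevaluator.find_reference_front(matrix_fitness)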

import pfevaluator

## Some available performance metrics for evaluating each type of Pareto front.
pfront_metrics = ["UD", "NDC"]
tpfront_metrics = ["ER", "ONVG", "MS", "GD", "IGD", "MPFE", "S", "STE"]
volume_metrics = ["HV", "HAR"]

pm = pfevaluator.metric_pfront(obtained_front, pfront_metrics)              # Evaluate for each algorithm in each trial
tm = pfevaluator.metric_tpfront(obtained_front, reference_front, tpfront_metrics)        # Same as above
vm = pfevaluator.metric_volume(obtained_front, reference_front, volume_metrics, None, all_fronts=matrix_fitness)

## obtained_front: the front you found in each test case (each trial of each algorithm).
## reference_front (True Pareto front): the True Pareto front of your problem.
##      If you don't know your True Pareto front, follow the steps above to get it from the population of obtained fronts,
##      using this function: reference_front = pfevaluator.find_reference_front(matrix_fitness)
##          matrix_fitness contains all of your fronts from all test cases.

## The result is a dict such as:     pm = { "UD": 0.2, "NDC": 0.1 }
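
Because each call returns a plain dict keyed by metric name, per-trial results are easy to aggregate. A short sketch continuing the snippet above, where trial_fronts (an illustrative name) holds the fronts from the N2 trials of one algorithm:

import numpy as np

trial_results = [pfevaluator.metric_tpfront(front, reference_front, tpfront_metrics)
                 for front in trial_fronts]
## Average a metric across trials, e.g. mean GD of this algorithm:
mean_gd = np.mean([res["GD"] for res in trial_results])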

  • The full test case is in the file examples/full.py

Important links

Contributions

Citation

  • If you use pfevaluator in your project, please cite my work:
@article{nguyen2019efficient,
  title={Efficient Time-Series Forecasting Using Neural Network and Opposition-Based Coral Reefs Optimization},
  author={Nguyen, Thieu and Nguyen, Tu and Nguyen, Binh Minh and Nguyen, Giang},
  journal={International Journal of Computational Intelligence Systems},
  volume={12},
  number={2},
  pages={1144--1161},
  year={2019},
  publisher={Atlantis Press}
}

Documents:

  1. Yen, G. G., & He, Z. (2013). Performance metric ensemble for multiobjective evolutionary algorithms. IEEE Transactions on Evolutionary Computation, 18(1), 131-144.
  2. Panagant, N., Pholdee, N., Bureerat, S., Yildiz, A. R., & Mirjalili, S. (2021). A Comparative Study of Recent Multi-objective Metaheuristics for Solving Constrained Truss Optimisation Problems. Archives of Computational Methods in Engineering, 1-17.
  3. Knowles, J., & Corne, D. (2002, May). On metrics for comparing nondominated sets. In Proceedings of the 2002 Congress on Evolutionary Computation. CEC'02 (Cat. No. 02TH8600) (Vol. 1, pp. 711-716). IEEE.
  4. Guerreiro, A. P., Fonseca, C. M., & Paquete, L. (2020). The hypervolume indicator: Problems and algorithms. arXiv preprint arXiv:2005.00515.