EC-KitY is a Python tool kit for evolutionary computation, and it is scikit-learn compatible.
Currently, we have implemented genetic algorithms (GA) and tree-based genetic programming (GP), but EC-KitY will grow!
EC-KitY is:
- A comprehensive toolkit for running evolutionary algorithms
- Written in Python
- Usable with or without scikit-learn, i.e., supporting both sklearn and non-sklearn modes
- Designed with modern software engineering in mind
- Designed to support all popular EC paradigms (GA, GP, ES, coevolution, multi-objective, etc.).
The minimal Python version for EC-KitY is Python 3.8.
The package's dependencies are listed in requirements.txt.
For sklearn mode, EC-KitY additionally requires:
- scikit-learn (>=1.1)
pip install eckity
The API documentation is available here.
(Work in progress: some modules and functions are not yet documented.)
The tutorials are available here, walking you through running EC-KitY both in sklearn mode and in non-sklearn mode.
More examples are in the examples folder.
All you need to do is define a fitness-evaluation method, by sub-classing SimpleIndividualEvaluator; a minimal sketch is shown below.
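For example, the following minimal sketch (not one of the shipped examples; the import path and the individual's vector attribute are assumptions that may differ across versions) computes a OneMax-style fitness for a GA bit-vector individual:

from eckity.evaluators.simple_individual_evaluator import SimpleIndividualEvaluator

class OneMaxEvaluator(SimpleIndividualEvaluator):
    # Fitness of a GA bit-vector individual = number of 1s in its vector (OneMax).
    def evaluate_individual(self, individual):
        # `individual.vector` is assumed to hold the bit list; adapt to your encoding.
        return sum(individual.vector)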
You can run the examples with ease by opening this colab notebook.
You can run an EA with just 3 lines of code. The problem solved here is simple symbolic regression.
Additional information on this problem can be found in the Symbolic Regression Tutorial.
from eckity.algorithms.simple_evolution import SimpleEvolution
from eckity.subpopulation import Subpopulation
from examples.treegp.non_sklearn_mode.symbolic_regression.sym_reg_evaluator import SymbolicRegressionEvaluator
algo = SimpleEvolution(Subpopulation(SymbolicRegressionEvaluator()))
algo.evolve()
print(f'algo.execute(x=2,y=3,z=4): {algo.execute(x=2, y=3, z=4)}')
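The run above uses the default evolutionary parameters. The sketch below (an assumption based on the package's examples; keyword names such as population_size and max_generation may differ between versions) shows where such parameters would typically be set:

algo = SimpleEvolution(Subpopulation(SymbolicRegressionEvaluator(), population_size=300),  # assumed kwarg
                       max_generation=100)  # assumed kwarg: number of generations to evolve
algo.evolve()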
This example solves the same problem, but this time making use of sklearn compatibility, a core feature of EC-KitY. Additional information for this example can be found in the Sklearn Symbolic Regression Tutorial.
A simple sklearn-compatible EA run:
from sklearn.datasets import make_regression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from eckity.algorithms.simple_evolution import SimpleEvolution
from eckity.creators.gp_creators.full import FullCreator
from eckity.genetic_encodings.gp.tree.utils import create_terminal_set
from eckity.sklearn_compatible.regression_evaluator import RegressionEvaluator
from eckity.sklearn_compatible.sk_regressor import SKRegressor
from eckity.subpopulation import Subpopulation
X, y = make_regression(n_samples=100, n_features=3)
terminal_set = create_terminal_set(X)
algo = SimpleEvolution(Subpopulation(creators=FullCreator(terminal_set=terminal_set),
                                     evaluator=RegressionEvaluator()))
regressor = SKRegressor(algo)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
regressor.fit(X_train, y_train)
print('MAE on test set:', mean_absolute_error(y_test, regressor.predict(X_test)))
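Since SKRegressor exposes the standard fit/predict estimator interface (as used above), it can also be plugged into other sklearn tools. The following is a minimal sketch, not part of the shipped examples, reusing the objects from the snippet above to wrap the evolutionary regressor in an sklearn Pipeline with feature scaling:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Chain feature scaling with the evolutionary regressor in a standard sklearn Pipeline.
pipeline = Pipeline([('scaler', StandardScaler()),
                     ('regressor', regressor)])
pipeline.fit(X_train, y_train)
print('Pipeline MAE on test set:', mean_absolute_error(y_test, pipeline.predict(X_test)))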
Here's a comparison table. The full paper is available here.
Moshe Sipper, Achiya Elyasaf, Itai Tzruia, Tomer Halperin
Citations are always appreciated 😊:
@article{eckity2023,
author = {Moshe Sipper and Tomer Halperin and Itai Tzruia and Achiya Elyasaf},
title = {{EC-KitY}: Evolutionary computation tool kit in {Python} with seamless machine learning integration},
journal = {SoftwareX},
volume = {22},
pages = {101381},
year = {2023},
url = {https://www.sciencedirect.com/science/article/pii/S2352711023000778},
}
@misc{eckity2022git,
author = {Sipper, Moshe and Halperin, Tomer and Tzruia, Itai and Elyasaf, Achiya},
title = {{EC-KitY}: Evolutionary Computation Tool Kit in {Python}},
year = {2022},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://www.eckity.org/}}
}