Welcome to deforce's documentation!
===================================

.. image:: https://pepy.tech/badge/deforce
.. image:: https://readthedocs.org/projects/deforce/badge/?version=latest
.. image:: https://img.shields.io/badge/Chat-on%20Telegram-blue
.. image:: https://img.shields.io/badge/PR-Welcome-%23FF8300.svg?

deforce (Metaheuristic-optimized Multi-Layer Perceptron) is a Python library that implements both the traditional Multi-Layer Perceptron and its variants. These include Metaheuristic-optimized MLP models (GA, PSO, WOA, TLO, DE, ...) and Gradient Descent-optimized MLP models (SGD, Adam, Adadelta, Adagrad, ...). It provides a comprehensive list of optimizers for training MLP models and is compatible with the Scikit-Learn library, so you can perform model selection and hyperparameter tuning using Scikit-Learn's built-in tools.

* Free software: GNU General Public License (GPL) V3
* Provided estimators: CfnRegressor, CfnClassifier, DfoCfnRegressor, DfoCfnClassifier
* Total Metaheuristic-based MLP Regressors: > 200 models
* Total Metaheuristic-based MLP Classifiers: > 200 models
* Total Gradient Descent-based MLP Regressors: 12 models
* Total Gradient Descent-based MLP Classifiers: 12 models
* Supported performance metrics: >= 67 (47 for regression, 20 for classification)
* Supported objective functions (as fitness or loss functions): >= 67 (47 for regression, 20 for classification)
* Documentation: https://deforce.readthedocs.io
* Supported Python versions: >= 3.8
* Dependencies: numpy, scipy, scikit-learn, pandas, mealpy, permetrics, torch, skorch

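Because deforce estimators follow the Scikit-Learn estimator API, they can be dropped into standard tools such as ``Pipeline`` and ``GridSearchCV``. The sketch below illustrates that workflow using Scikit-Learn's own ``MLPRegressor`` as a stand-in; to use deforce, swap in one of its estimators (e.g. ``CfnRegressor``) with that estimator's own parameter grid. The stand-in model and its grid here are illustrative assumptions, not deforce's verified signatures.

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy regression data
X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Scale inputs, then fit an MLP. A deforce estimator (e.g. CfnRegressor)
# would slot into the "model" step, since it follows the same estimator API.
pipe = Pipeline([
    ("scaler", StandardScaler()),
    ("model", MLPRegressor(max_iter=500, random_state=42)),
])

# Hyperparameter tuning with Scikit-Learn's grid search
grid = GridSearchCV(
    pipe,
    param_grid={"model__hidden_layer_sizes": [(10,), (20,)]},
    cv=3,
    scoring="r2",
)
grid.fit(X_train, y_train)
print(grid.best_params_)
print(grid.score(X_test, y_test))
```

The same pattern covers cross-validation, scoring, and pipeline composition for any estimator that implements ``fit``/``predict``, which is what the Scikit-Learn compatibility mentioned above provides.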
.. toctree::
   :maxdepth: 4
   :caption: Quick Start:

   pages/quick_start.rst

.. toctree::
   :maxdepth: 4
   :caption: Models API:

   pages/deforce.rst

.. toctree::
   :maxdepth: 4
   :caption: Support:

   pages/support.rst



Indices and tables
==================