
Releases: heal-research/pyoperon

pyoperon-0.4.0

13 Jun 15:37
16f71f1

What's Changed

This release is based on Operon rev. 4a93f98

  • minor bugfix related to lexicographical sorting in NSGA2
  • best order sort (DOI) implementation; Operon now contains all well-known non-dominated sorting algorithms
  • refactored dispatch table using a generic backend interface (based on mdspan), with support for other math backends (Blaze, Eve, etc.)
  • improved likelihoods (Gaussian, Poisson), which can also be used as objective functions
  • support for the SGD and L-BFGS algorithms for parameter tuning
  • many other small improvements and fixes
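To clarify what the non-dominated sorting algorithms mentioned above compute, here is a minimal illustrative sketch using the simple O(n²) approach; this is only a conceptual example, not Operon's best order sort implementation:

```python
# Illustrative sketch of non-dominated sorting (naive O(n^2) variant).
# NOT Operon's best order sort implementation; shown only to clarify
# what a non-dominated sorting algorithm produces.

def dominates(a, b):
    """True if point a Pareto-dominates point b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Partition point indices into Pareto fronts; front 0 is non-dominated."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # a point belongs to the current front if no other remaining point dominates it
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Two objectives, both minimized: (error, model size)
pts = [(0.1, 9), (0.2, 5), (0.3, 3), (0.2, 7), (0.4, 4)]
print(non_dominated_sort(pts))  # [[0, 1, 2], [3, 4]]
```

NSGA2 uses such a partition to rank individuals; the lexicographical sorting fix above concerns tie-breaking within this ranking.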

The scikit-learn interface has been updated with some fixes and additional parameters:

  • the local_iterations parameter has been renamed to optimizer_iterations
  • the optimizer parameter accepts lm, sgd, or lbfgs to choose the optimization method
  • the optimizer_likelihood parameter specifies the likelihood used by the optimizer
  • optimizer_batch_size controls the batch size for gradient descent
  • local_search_probability controls the probability of applying local search to an individual
  • lamarckian_probability controls the probability of writing optimized coefficients back into the genotype
  • add_model_scale_term and add_model_intercept_term control linear scaling of the final model
  • the uncertainty parameter specifies the variance of the error (taken into account inside the likelihood)
  • sgd_update_rule, sgd_learning_rate, sgd_beta, sgd_beta2, and sgd_epsilon configure the SGD algorithm
  • model_selection_criterion specifies which model from the final Pareto front is returned (when using NSGA2)
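A hypothetical configuration sketch using the parameters listed above; the values chosen (and the string for model_selection_criterion) are illustrative assumptions, not recommended defaults:

```python
# Illustrative parameter dictionary for pyoperon's scikit-learn interface.
# Parameter names come from this release's changelog; the values and the
# exact accepted strings are assumptions for the sake of the example.
params = {
    "optimizer": "lbfgs",                # one of 'lm', 'sgd', 'lbfgs'
    "optimizer_iterations": 100,         # formerly 'local_iterations'
    "optimizer_likelihood": "gaussian",  # likelihood used by the optimizer
    "local_search_probability": 1.0,     # chance of local search per individual
    "lamarckian_probability": 1.0,       # write tuned coefficients back to the genotype
    "add_model_scale_term": True,        # linear scaling of the final model
    "add_model_intercept_term": True,
}
# The dictionary would then be passed to the regressor, e.g.:
# from pyoperon.sklearn import SymbolicRegressor
# reg = SymbolicRegressor(**params).fit(X, y)
print(sorted(params))
```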

pyoperon-0.3.6

31 Mar 20:54
a37d3b6

Changelog

This release is based on Operon rev. 88a15c3 and includes the following features:

  • hand-crafted reverse-mode automatic differentiation module for symbolic expression trees, with much better runtime performance
  • the ability to optimize all tree node coefficients via nonlinear least squares (previously, only leaf-node coefficients could be optimized)
  • slightly faster interpreter performance (+5-10%)
  • a selection of new evaluators
    • AggregateEvaluator: aggregates multiple objectives into a single scalar (min, max, median, mean, harmonic mean, sum)
    • BayesianInformationCriterionEvaluator: computes the value of the Bayesian Information Criterion (BIC) for a symbolic regression model
    • AkaikeInformationCriterionEvaluator: computes the value of the Akaike Information Criterion (AIC) for a symbolic regression model
    • MinimumDescriptionLengthEvaluator: computes the Minimum Description Length (MDL) of a symbolic regression model
  • various other fixes and improvements
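The information criteria behind the BIC and AIC evaluators are the standard textbook formulas; a small sketch of what they compute for a model with log-likelihood ln L, k free parameters, and n samples (this is not Operon's exact code):

```python
import math

def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2*ln(L). Smaller is better."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L). Smaller is better."""
    return k * math.log(n) - 2 * log_likelihood

# For large n, BIC penalizes extra parameters more heavily than AIC
# (ln(n) > 2 once n > 7), so it tends to prefer smaller models.
print(aic(-120.0, 5))        # 250.0
print(bic(-120.0, 5, 1000))  # ~274.54
```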

The scikit-learn module now defaults to using the minimum description length (MDL) to select the best model from the Pareto front. The selection criterion is configurable: MSE, BIC, AIC, or MDL.