experitur automates machine learning and other computer science experiments and stores the results in an easily accessible format. It includes grid search, random search, parameter substitution, inheritance and resuming aborted experiments.


Read the documentation!

Experiment description

Every experiment is described in a regular Python file. The @Experiment decorator is used to mark experiment entry-points. By default, parameters are defined as a parameter grid where each parameter has a list of values that it can take. A number of trials is generated from the cross product of the values of each parameter. (So it works like sklearn.model_selection.ParameterGrid.)
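The grid expansion can be sketched with itertools.product; the parameter names and values below are just those of the example experiment, not experitur internals:

```python
from itertools import product

# Example parameter grid (the same shape experitur expands):
parameters = {
    "parameter_1": [1, 2, 3],
    "parameter_2": ["a", "b", "c"],
}

# Cross product of the value lists -> one dict per trial,
# analogous to sklearn.model_selection.ParameterGrid.
keys = list(parameters)
trials = [dict(zip(keys, combo)) for combo in product(*parameters.values())]

print(len(trials))  # 9 trials for a 3 x 3 grid
print(trials[0])    # {'parameter_1': 1, 'parameter_2': 'a'}
```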

An experiment is a regular function that is decorated with @Experiment (unless it is abstract or derived). Upon execution, this function gets called with the current trial's parameters. It may return a result dictionary.

Signature: (trial: Trial) -> Optional[dict]

from experitur import Experiment, Trial

@Experiment(
    parameters={
        "parameter_1": [1, 2, 3],
        "parameter_2": ["a", "b", "c"],
    }
)
def example_experiment(trial: Trial):
    """This is an example experiment."""
    print("parameter_1:", trial["parameter_1"])
    print("parameter_2:", trial["parameter_2"])
    return {}

You can run the experiment using experitur run, and example_experiment will be called nine times, once for every combination of [1,2,3] x [a,b,c].

Multiple experiments

The Python file can contain multiple experiments:

from experitur import Experiment, Trial

@Experiment()
def example1(trial: Trial):
    ...

@Experiment()
def example2(trial: Trial):
    ...

Experiment inheritance

One experiment may inherit the settings of another, using the parent parameter.

from experitur import experiment

@experiment()
def example1(trial):
    ...

# Derived with own entry point:
@experiment(parent=example1)
def example2(trial):
    ...

# Derived with inherited entry point:
example3 = experiment("example3", parent=example2)

The trial object

Every experiment receives a Trial instance that allows access to the parameters and meta-data of the trial.

Parameters are accessed with the [] operator (e.g. trial["a"]), meta-data is accessed with the . operator (e.g. trial.wdir).
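As a minimal illustration of this access pattern, here is a hypothetical stand-in class (not experitur's actual Trial implementation):

```python
# Hypothetical stand-in that mimics the access pattern described above:
# parameters via [], meta-data via attributes.
class TrialLike:
    def __init__(self, parameters, wdir):
        self._parameters = parameters
        self.wdir = wdir  # meta-data: attribute access

    def __getitem__(self, key):
        # Parameters: item access, e.g. trial["a"]
        return self._parameters[key]

trial = TrialLike({"a": 1, "b": 3}, wdir="simple/simple/a-1_b-3")
print(trial["a"])  # parameter access -> 1
print(trial.wdir)  # meta-data access -> simple/simple/a-1_b-3
```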


Files

When experitur executes a script, it creates the following file structure in the directory where the experiment file is located:

+- <script>/
|  +- <experiment_id>/
|  |  +- <trial_id>/
|  |  |  +- experitur.yaml
|  |  ...
|  ...

<script>/<experiment_id>/<trial_id>/experitur.yaml contains the parameters and the results from a trial, e.g.:

experiment:
  func: simple.simple
  meta: null
  name: simple
  parent: null
id: simple/a-1_b-3
parameters:
  a: 1
  b: 3
result: null
success: true
time_end: 2020-03-26 21:01:51.648282
time_start: 2020-03-26 21:01:51.147210
wdir: simple/simple/a-1_b-3

Most items should be self-explanatory. parameters are the parameters passed to the entry point. id is derived from the parameters that are varied in the parameter grid. This way, you can easily interpret the file structure.
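Because each trial is stored as a plain YAML file, results can be collected with standard tools. A sketch using pathlib and PyYAML (the file layout is as described above; PyYAML is an assumed dependency, and collect_trials is a hypothetical helper, not part of experitur):

```python
from pathlib import Path

import yaml  # PyYAML, assumed available


def collect_trials(script_dir):
    """Load every <experiment_id>/<trial_id>/experitur.yaml below script_dir."""
    records = []
    for path in sorted(Path(script_dir).glob("*/*/experitur.yaml")):
        with open(path) as f:
            records.append(yaml.safe_load(f))
    return records


# Example usage:
# for trial in collect_trials("simple"):
#     print(trial["id"], trial["parameters"], trial["result"])
```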


Installation

experitur is packaged on PyPI.

pip install experitur

Be warned that this package is currently under heavy development and anything might change any time!

To install the development version, do:

pip install -U git+


Examples

  • examples/ A very basic example showing the workings of set_default_parameters and apply_parameters.
  • examples/ Try different parameters of sklearn.svm.SVC to classify handwritten digits (the MNIST test set). Run the example, add more parameter values and see how experitur skips already existing configurations during the next run.


Contributing

experitur is under active development, so any user feedback, bug reports, comments, suggestions, or pull requests are highly appreciated. Please use the bug tracker and fork the repository.


Compatibility

experitur is tested with Python 3.6, 3.7 and 3.8.

