Machine Learning in R


Package website: release | dev

Machine learning in R.

Badges: R CMD check via {tic} | CRAN status | CRAN checks | CRAN downloads | StackOverflow | lifecycle | codecov


{mlr} is considered retired by the mlr-org team. We will no longer add new features and will only fix severe bugs. We suggest using the new mlr3 framework from now on and for future projects.

Not all features of {mlr} have been implemented in {mlr3} yet. If you are missing a crucial feature, please open an issue in the respective mlr3 extension package and do not hesitate to follow up on it.






Citing {mlr} in publications

Please cite our JMLR paper [bibtex].

Some parts of the package were created as part of other publications. If you use these parts, please cite the relevant work appropriately. An overview of all {mlr} related publications can be found here.


R does not define a standardized interface for its machine-learning algorithms. Therefore, for any non-trivial experiments, you need to write lengthy, tedious and error-prone wrappers to call the different algorithms and unify their respective output.

Additionally, you need to implement infrastructure to

  • resample your models
  • optimize hyperparameters
  • select features
  • cope with pre- and post-processing of data
  • compare models in a statistically meaningful way

As this becomes computationally expensive, you might want to parallelize your experiments as well. This often forces users to make crummy trade-offs in their experiments due to time constraints or lacking expert programming skills.

{mlr} provides this infrastructure so that you can focus on your experiments! The framework provides supervised methods like classification, regression and survival analysis along with their corresponding evaluation and optimization methods, as well as unsupervised methods like clustering. It is written in a way that you can extend it yourself or deviate from the implemented convenience methods and construct your own complex experiments or algorithms.
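As a minimal sketch of that interface (using the built-in iris data and a CART decision tree; any installed learner name from the tutorial would work in its place):

```r
library(mlr)

# Define a classification task on the iris data
task = makeClassifTask(data = iris, target = "Species")

# Construct a learner by name (here: a CART tree via the rpart package)
lrn = makeLearner("classif.rpart")

# Train, predict and evaluate with a unified interface,
# regardless of which underlying algorithm is used
mod = train(lrn, task)
pred = predict(mod, task = task)
performance(pred, measures = acc)
```

Swapping the algorithm only requires changing the learner name passed to makeLearner(); the rest of the code stays identical.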

Furthermore, the package is nicely connected to the OpenML R package and its online platform, which aims to support collaborative machine learning online and makes it easy to share datasets as well as machine learning tasks, algorithms and experiments in order to support reproducible research.


  • Clear S3 interface to R classification, regression, clustering and survival analysis methods
  • Abstract description of learners and tasks by properties
  • Convenience methods and generic building blocks for your machine learning experiments
  • Resampling methods like bootstrapping, cross-validation and subsampling
  • Extensive visualizations (e.g. ROC curves, predictions and partial predictions)
  • Simplified benchmarking across data sets and learners
  • Easy hyperparameter tuning using different optimization strategies, including potent configurators like
    • iterated F-racing (irace)
    • sequential model-based optimization
  • Variable selection with filters and wrappers
  • Nested resampling of models with tuning and feature selection
  • Cost-sensitive learning, threshold tuning and imbalance correction
  • Wrapper mechanism to extend learner functionality in complex ways
  • Possibility to combine different processing steps to a complex data mining chain that can be jointly optimized
  • OpenML connector for the Open Machine Learning server
  • Built-in parallelization
  • Detailed tutorial
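The resampling and tuning features above compose in a few lines. A hedged sketch using 3-fold cross-validation and a small illustrative grid over rpart's cp parameter (the parameter values here are arbitrary examples, not recommendations):

```r
library(mlr)

task = makeClassifTask(data = iris, target = "Species")
lrn = makeLearner("classif.rpart")

# 3-fold cross-validation of the learner
rdesc = makeResampleDesc("CV", iters = 3)
res = resample(lrn, task, rdesc, measures = acc)
res$aggr  # mean accuracy across the folds

# Grid search over the complexity parameter cp,
# evaluated with the same resampling description
ps = makeParamSet(
  makeDiscreteParam("cp", values = c(0.01, 0.05, 0.1))
)
ctrl = makeTuneControlGrid()
tuned = tuneParams(lrn, task, rdesc, measures = acc,
                   par.set = ps, control = ctrl)
tuned$x  # best hyperparameter setting found
```

Replacing makeTuneControlGrid() with another control object (e.g. for iterated F-racing or model-based optimization) switches the search strategy without touching the rest of the setup.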


Simple usage questions are better suited to Stack Overflow using the mlr tag.

Please note that all of us work in academia and put a lot of work into this project simply because we like it, not because we are paid for it.

New development efforts should go into {mlr3}. We have our own style guide, which can easily be applied by using the mlr_style from the styler package. See our wiki for more information.

Talks, Workshops, etc.

mlr-outreach holds all outreach activities related to {mlr} and {mlr3}.
