Penn Machine Learning Benchmarks

This repository contains the code and data for a large, curated set of benchmark datasets for evaluating and comparing supervised machine learning algorithms. These data sets cover a broad range of applications, and include binary/multi-class classification problems and regression problems, as well as combinations of categorical, ordinal, and continuous features. There are no missing values in these data sets.

Check the datasets directory for information about the individual data sets. All binary and multiclass classification datasets are in the classification sub-directory, and all regression datasets are in the regression sub-directory.

Data set format

All data sets are stored in a common format:

  • The first row contains the column names
  • Each following row corresponds to one record of the data
  • The target column is named target
  • All columns are tab (\t) separated
  • All files are compressed with gzip to conserve space
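Because every dataset follows this layout, any of the files can also be loaded directly with pandas, without the wrapper. A minimal sketch of the round trip, using made-up column names and values in place of a real PMLB dataset:

```python
import gzip

import pandas as pd

# Write a tiny gzipped, tab-separated file in the PMLB layout
# (the column names and values here are made up for illustration)
with gzip.open('example_dataset.tsv.gz', 'wt') as f:
    f.write('feature_a\tfeature_b\ttarget\n')
    f.write('1.5\t0\t1\n')
    f.write('2.0\t1\t0\n')

# Any file in this format loads directly with pandas
df = pd.read_csv('example_dataset.tsv.gz', sep='\t', compression='gzip')
print(df.columns.tolist())  # ['feature_a', 'feature_b', 'target']
```

The same `read_csv` call works on the real files in the `classification` and `regression` sub-directories.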

Python wrapper

For easy access to the benchmark data sets, we have provided a Python wrapper named pmlb. The wrapper can be installed via pip:

pip install pmlb

and used in Python scripts as follows:

from pmlb import fetch_data

# Returns a pandas DataFrame
adult_data = fetch_data('adult')

The fetch_data function has two additional parameters:

  • return_X_y (True/False): Whether to return the data in scikit-learn format, with the features and labels stored in separate NumPy arrays.
  • local_cache_dir (string): The directory on your local machine to store the data files so you don't have to fetch them over the web again. By default, the wrapper does not use a local cache directory.

For example:

from pmlb import fetch_data

# Returns NumPy arrays
adult_X, adult_y = fetch_data('adult', return_X_y=True, local_cache_dir='./')

You can also list all of the available data sets as follows:

from pmlb import dataset_names


Or if you only want a list of available classification or regression datasets:

from pmlb import classification_dataset_names, regression_dataset_names


Example usage: Compare two classification algorithms with PMLB

PMLB is designed to make it easy to benchmark machine learning algorithms against each other. Below is a Python code snippet showing the most basic way to use PMLB to compare two algorithms.

from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

import matplotlib.pyplot as plt
import seaborn as sb

from pmlb import fetch_data, classification_dataset_names

logit_test_scores = []
gnb_test_scores = []

for classification_dataset in classification_dataset_names:
    X, y = fetch_data(classification_dataset, return_X_y=True)
    train_X, test_X, train_y, test_y = train_test_split(X, y)

    logit = LogisticRegression()
    gnb = GaussianNB()

    logit.fit(train_X, train_y)
    gnb.fit(train_X, train_y)

    logit_test_scores.append(logit.score(test_X, test_y))
    gnb_test_scores.append(gnb.score(test_X, test_y))

sb.boxplot(data=[logit_test_scores, gnb_test_scores], notch=True)
plt.xticks([0, 1], ['LogisticRegression', 'GaussianNB'])
plt.ylabel('Test Accuracy')
plt.show()
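Beyond the box plot, one simple way to summarize such a comparison is to count on how many datasets each model scored higher. A minimal sketch, with made-up accuracies standing in for the scores collected in the loop above:

```python
# Made-up test accuracies standing in for the values collected in the loop above
logit_test_scores = [0.85, 0.78, 0.92, 0.66]
gnb_test_scores = [0.80, 0.81, 0.88, 0.70]

# Count the datasets on which each model achieved the higher test accuracy
logit_wins = sum(l > g for l, g in zip(logit_test_scores, gnb_test_scores))
gnb_wins = sum(g > l for l, g in zip(logit_test_scores, gnb_test_scores))
print(logit_wins, gnb_wins)  # 2 2
```

With the real score lists, the same two sums give a quick head-to-head tally across the full benchmark suite.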

Citing PMLB

If you use PMLB in a scientific publication, please consider citing the following paper:

Randal S. Olson, William La Cava, Patryk Orzechowski, Ryan J. Urbanowicz, and Jason H. Moore (2017). PMLB: a large benchmark suite for machine learning evaluation and comparison. BioData Mining 10, page 36.

BibTeX entry:

    @article{Olson2017PMLB,
        author="Olson, Randal S. and La Cava, William and Orzechowski, Patryk and Urbanowicz, Ryan J. and Moore, Jason H.",
        title="PMLB: a large benchmark suite for machine learning evaluation and comparison",
        journal="BioData Mining",
        year="2017",
        volume="10",
        pages="36"
    }

Support for PMLB

PMLB was developed in the Computational Genetics Lab at the University of Pennsylvania with funding from the NIH under grant R01 AI117694. We are incredibly grateful for the support of the NIH and the University of Pennsylvania during the development of this project.