
common-datasets: common machine learning datasets

This package provides an unofficial collection of datasets widely used in the evaluation of machine learning techniques, mainly small and imbalanced datasets for binary classification, multiclass classification, and regression. The datasets are provided in the usual sklearn.datasets format, with missing data imputed and categorical and ordinal features encoded. The authors of this repository do not own any licenses for the datasets; the goal of the project is to provide a standardized collection of datasets for research purposes.

PLEASE DO NOT CITE OR REFER TO THIS PACKAGE IN ANY FORM!

If you use data through this repository, please cite the original works publishing and specifying these datasets:

@article{keel,
  author={Alcala-Fdez, J. and Fernandez, A. and Luengo, J. and Derrac, J. and Garcia, S.
          and Sanchez, L. and Herrera, F.},
  title={KEEL Data-Mining Software Tool: Data Set Repository, Integration of Algorithms
          and Experimental Analysis Framework},
  journal={Journal of Multiple-Valued Logic and Soft Computing},
  volume={17},
  number={2-3},
  year={2011},
  pages={255-287}}

@misc{uci,
  author={Dua, Dheeru and Karra Taniskidou, Efi},
  year={2017},
  title={{UCI} Machine Learning Repository},
  url={http://archive.ics.uci.edu/ml},
  institution={University of California, Irvine, School of Information and Computer Sciences}}

@article{krnn,
  author={X. J. Zhang and Z. Tari and M. Cheriet},
  title={{KRNN}: k {Rare-class Nearest Neighbor} classification},
  journal={Pattern Recognition},
  year={2017},
  volume={62},
  number={2},
  pages={33--44}}

For each dataset, a citation key is provided as part of the dataset, referring to its publisher or to a relevant publication in which the dataset has been used in the given configuration. For example:

# binary classification
>>> import common_datasets.binary_classification as binclas

>>> dataset = binclas.load_abalone19()
>>> dataset['citation_key']
'keel'

Introduction

The package contains 119 binary classification, 23 multiclass classification and 23 regression datasets.

Installation

The package can be cloned from GitHub in the usual way, and the latest stable version is also available in the PyPI repository:

pip install common_datasets
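
Installing from source follows the usual clone-and-install workflow; a sketch, assuming the repository path from the project's GitHub page:

git clone https://github.com/gykovacs/common_datasets.git
cd common_datasets
pip install .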

Use cases

Loading a dataset

# binary classification
import common_datasets.binary_classification as binclas

dataset = binclas.load_abalone19()

# multiclass classification
import common_datasets.multiclass_classification as multclas

dataset = multclas.load_abalone()

# regression
from common_datasets import regression

dataset = regression.load_treasury()
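
Each loader returns a dictionary in the usual sklearn.datasets style. A minimal sketch of accessing its contents, assuming the standard 'data' and 'target' keys alongside the 'citation_key' shown above:

# inspect a loaded dataset (the 'data' and 'target' keys are assumed
# from the sklearn.datasets convention)
import common_datasets.binary_classification as binclas

dataset = binclas.load_abalone19()

X, y = dataset['data'], dataset['target']  # feature matrix and label vector
print(X.shape, y.shape)                    # number of records and features
print(dataset['citation_key'])             # key of the original source to cite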

Querying all dataset loaders and loading a dataset

# binary classification
import common_datasets.binary_classification as binclas

data_loaders = binclas.get_data_loaders()

dataset_0 = data_loaders[0]()

# multiclass classification
import common_datasets.multiclass_classification as multclas

data_loaders = multclas.get_data_loaders()

dataset_0 = data_loaders[0]()

# regression
from common_datasets import regression

data_loaders = regression.get_data_loaders()

dataset_0 = data_loaders[0]()
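
Since each entry of the returned list is a parameterless loader function, one can iterate over all datasets of a problem class; a sketch, again assuming the 'data' and 'target' keys:

# survey the sizes of all binary classification datasets
import common_datasets.binary_classification as binclas

for loader in binclas.get_data_loaders():
    dataset = loader()
    X = dataset['data']
    print(loader.__name__, X.shape[0], 'records,', X.shape[1], 'features')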

Querying the loaders of the 5 smallest datasets by the total number of records

# binary classification
import common_datasets.binary_classification as binclas

data_loaders = binclas.get_filtered_data_loaders(n_smallest=5, sorting='n')

dataset_0 = data_loaders[0]()

# multiclass classification
import common_datasets.multiclass_classification as multclas

data_loaders = multclas.get_filtered_data_loaders(n_smallest=5, sorting='n')

dataset_0 = data_loaders[0]()

# regression
from common_datasets import regression

data_loaders = regression.get_filtered_data_loaders(n_smallest=5, sorting='n')

dataset_0 = data_loaders[0]()
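
A short illustration of using the filtered loaders, for example to verify that the selected datasets are indeed the smallest ones (the 'data' key is assumed as above):

# record counts of the 5 smallest binary classification datasets
import common_datasets.binary_classification as binclas

loaders = binclas.get_filtered_data_loaders(n_smallest=5, sorting='n')

for loader in loaders:
    dataset = loader()
    print(loader.__name__, dataset['data'].shape[0])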

Documentation