
OpenDataVal: a Unified Benchmark for Data Valuation

Assessing the quality of individual data points is critical for improving model performance and mitigating biases. However, there has been no systematic way to benchmark different data valuation algorithms.

OpenDataVal is an open-source initiative that provides a diverse array of datasets and models (image, NLP, and tabular), data valuation algorithms, and evaluation tasks, all usable with just a few lines of code.

OpenDataVal also provides leaderboards for data valuation tasks: we've curated several datasets and added artificial noise to them. Create your own DataEvaluator to top the leaderboards. OpenDataVal was accepted at the NeurIPS 2023 Datasets and Benchmarks track.

✨ Features


| Feature | Status | Links | Notes |
| --- | --- | --- | --- |
| Datasets | Stable | Docs | Embeddings available for image/NLP datasets |
| Models | Stable | Docs | Support available for sk-learn models |
| Data Evaluators | Stable | Docs | |
| Experiments | Stable | Docs | |
| Examples | Stable | | |
| CLI | Experimental | opendataval --help | No support for null values |

(Back to top)

⏳ Installation options

It is highly recommended to use a virtual environment for opendataval. Check out conda!
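For example, to create and activate a fresh conda environment (the Python version below is just illustrative):

conda create -n opendataval python=3.11
conda activate opendataval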

  1. Install with pip
    pip install opendataval
  2. Clone the repo and install
    git clone https://github.com/opendataval/opendataval.git
    make install
    a. Install optional dependencies if you're contributing
    make install-dev
    b. If you want to pull in Kaggle datasets, we recommend looking at how to add a kaggle folder to the current directory. Tutorial here

(Back to top)

⚡ Quick Start


To set up an experiment on DataEvaluators, use the following snippet. Feel free to change the source code as needed for a project.

from opendataval.dataval import DataOob
from opendataval.experiment import (
    ExperimentMediator,
    discover_corrupted_sample,
    noisy_detection,
)

exper_med = ExperimentMediator.model_factory_setup(
    dataset_name='iris',
    force_download=False,
    train_count=50,
    valid_count=50,
    test_count=50,
    model_name='ClassifierMLP',
    train_kwargs={'epochs': 5, 'batch_size': 20},
)
list_of_data_evaluators = [DataOob()]  # Define evaluators here
eval_med = exper_med.compute_data_values(list_of_data_evaluators)

# Run a noisy-data discovery experiment for each DataEvaluator and plot the results
data, fig = eval_med.plot(discover_corrupted_sample)

# Run a non-plottable experiment
data = eval_med.evaluate(noisy_detection)

💻 CLI

opendataval comes with a quick CLI tool. The tool is under development, and a template for the csv input can be found at cli.csv. Note that kwarg arguments must be valid JSON.

To use it, run the following command if installed with make install:

opendataval --file cli.csv -n [job_id] -o [path/to/output/]

To run the script without installing it:

python opendataval --file cli.csv -n [job_id] -o [path/to/output/]

(Back to top)

🎛️ API

Here are the four interacting parts of opendataval:

  1. DataFetcher: loads data and holds metadata regarding splits.
  2. Model: the trainable prediction model.
  3. DataEvaluator: measures the data values of input data points for a specified model.
  4. ExperimentMediator: facilitates experiments regarding data values across several DataEvaluators.
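
As a rough end-to-end sketch of how these four parts compose (each is covered in detail below; the iris dimensions and the omitted optional ExperimentMediator arguments are illustrative assumptions):

from opendataval.dataloader import DataFetcher, mix_labels
from opendataval.dataval import DataOob
from opendataval.experiment import ExperimentMediator, noisy_detection
from opendataval.model import LogisticRegression

# 1. Fetch, split, and noisify a dataset
fetcher = DataFetcher(dataset_name='iris').split_dataset_by_count(50, 50, 50)
fetcher = fetcher.noisify(mix_labels, noise_rate=0.1)

# 2. Define the prediction model (iris: 4 features, 3 classes -- assumed here)
model = LogisticRegression(input_dim=4, output_dim=3)

# 3./4. Compute data values with a DataEvaluator through the ExperimentMediator
exper_med = ExperimentMediator(fetcher, model).compute_data_values([DataOob()])
df = exper_med.evaluate(noisy_detection)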

(Back to top)

The DataFetcher takes the name of a dataset registered with Register and loads, transforms, splits, and adds noise to the dataset.

from opendataval.dataloader import DataFetcher, mix_labels

DataFetcher.datasets_available()  # ['dataset_name1', 'dataset_name2']
fetcher = DataFetcher(dataset_name='dataset_name1')

fetcher = fetcher.split_dataset_by_count(70, 20, 10)
fetcher = fetcher.noisify(mix_labels, noise_rate=.1)

x_train, y_train, x_valid, y_valid, x_test, y_test = fetcher.datapoints
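
You can also register your own dataset so the DataFetcher can load it by name. The decorator form and keyword below are assumptions for illustration only; see the dataloader docs for the exact Register API:

from opendataval.dataloader import Register

# Hypothetical sketch: decorate a loader returning (covariates, labels).
# The 'categorical' keyword is an assumption about Register's signature.
@Register('my_dataset', categorical=True)
def download_my_dataset():
    covariates, labels = load_my_data()  # load_my_data is a hypothetical helper
    return covariates, labels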

(Back to top)

Model is the trainable prediction model used by the Data Evaluators.

from opendataval.model import LogisticRegression

model = LogisticRegression(input_dim, output_dim)

model.fit(x, y)
model.predict(x)
>>> torch.Tensor(...)
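
Per the features table, sk-learn models are supported through a wrapper. Below is a minimal sketch, assuming a ClassifierSkLearnWrapper that adapts an sk-learn classifier class to the Model interface (verify the name and signature against the model docs):

from sklearn.ensemble import RandomForestClassifier
from opendataval.model import ClassifierSkLearnWrapper

# Wrap the sk-learn classifier class; the second argument (number of classes)
# is an assumption about the wrapper's signature.
model = ClassifierSkLearnWrapper(RandomForestClassifier, 3)
model.fit(x, y)
model.predict(x)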

(Back to top)

We have a catalog of DataEvaluators to run experiments with. To use one, pass in the Model, the DataFetcher, and an evaluation metric (such as accuracy).

from opendataval.dataval.ame import AME

dataval = (
    AME(num_models=8000)
    .train(fetcher=fetcher, pred_model=model, metric=metric)
)

data_values = dataval.data_values  # Cached values
data_values = dataval.evaluate_data_values()  # Recomputed values
>>> np.ndarray([.888, .132, ...])
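
You can also write your own DataEvaluator by subclassing the base class. A minimal sketch, assuming the abstract hooks are train_data_values and evaluate_data_values and that the base class stores the training split during training setup (check the API docs for the exact contract):

import numpy as np
from opendataval.dataval import DataEvaluator

class RandomEvaluator(DataEvaluator):
    """Toy evaluator that assigns uniformly random data values."""

    def train_data_values(self, *args, **kwargs):
        # A real evaluator would fit models here; this toy does nothing.
        return self

    def evaluate_data_values(self) -> np.ndarray:
        # One value per training point; self.x_train is assumed to be
        # populated by the base class before this is called.
        return np.random.uniform(size=(len(self.x_train),))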

(Back to top)

ExperimentMediator helps make a cohesive and controlled experiment. NOTE: Warnings are raised if errors occur in a specific DataEvaluator.

expermed = ExperimentMediator(fetcher, model, train_kwargs, metric_name).compute_data_values(data_evaluators)

Run experiments by passing in an experiment function with the signature (DataEvaluator, DataFetcher, ...) -> dict[str, Any]. There are 5 experiment functions found in exper_methods.py, three of which are plottable.

df = expermed.evaluate(noisy_detection)
df, figure = expermed.plot(discover_corrupted_sample)
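
Custom experiment functions following the same signature also work; mean_value_experiment below is a made-up example, not part of the library:

from typing import Any

def mean_value_experiment(evaluator, fetcher=None, **kwargs) -> dict[str, Any]:
    """Hypothetical experiment: summarize an evaluator's data values."""
    values = evaluator.data_values  # cached values, as shown above
    return {'mean': values.mean(), 'std': values.std()}

df = expermed.evaluate(mean_value_experiment)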

For more examples, please refer to the Documentation.

(Back to top)

πŸ… opendataval Leaderboards

For datasets that start with the prefix challenge, we provide leaderboards. Compute the data values with an ExperimentMediator and use the save_dataval function to save a csv. Upload it here! Uploading allows us to systematically compare your DataEvaluator against others in the field.

The available challenges are currently:

  1. challenge-iris
exper_med = ExperimentMediator.model_factory_setup(
    dataset_name='challenge-...', model_name=model_name, train_kwargs={...}, metric_name=metric_name
)
exper_med.compute_data_values([custom_data_evaluator]).evaluate(save_dataval, save_output=True)

(Back to top)

👋 Contributing

If you have a quick suggestion, recommendation, or bug fix, please open an issue. If you want to contribute to the project, whether through datasets, experiments, presets, or fixes, please see our Contribution page.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(Back to top)

💡 Vision

  • Clean, descriptive specification syntax -- based on modern object-oriented design principles for data science.
  • Fair model assessment and benchmarking -- easily build and evaluate your Data Evaluators.
  • Easily extensible -- easily add your own datasets, models, and experiments.

(Back to top)

πŸ›οΈ License

Distributed under the MIT License. See LICENSE.txt for more information.

(Back to top)

Cite Us

If you found the library or the paper useful, please cite us!

@inproceedings{
    jiang2023opendataval,
    title={OpenDataVal: a Unified Benchmark for Data Valuation},
    author={Kevin Fu Jiang and Weixin Liang and James Zou and Yongchan Kwon},
    booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
    year={2023},
    url={https://openreview.net/forum?id=eEK99egXeB}
}

(Back to top)