DistilKaggle is a curated dataset extracted from Kaggle Jupyter notebooks spanning September 2015 to October 2023. It is distilled from more than 300 GB of downloaded Kaggle kernels and retains only the data essential for research. The dataset comprises exclusively publicly available Python Jupyter notebooks from Kaggle; the metadata needed to locate and download these kernels was obtained from the MetaKaggle dataset provided by Kaggle.
The DistilKaggle dataset consists of three main CSV files:
- code.csv: Contains over 12 million rows of code cells extracted from the Kaggle kernels. Each row is identified by the kernel's ID and its cell index, for reproducibility.
- markdown.csv: Contains over 5 million rows of markdown cells extracted from the Kaggle kernels, identified by kernel ID and cell index in the same way as code.csv.
- notebook_metrics.csv: Provides the notebook-level features described in the paper accompanying this dataset, covering over 517,000 Python notebooks (a short loading sketch follows this list).
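As a rough illustration of how the three files fit together, the sketch below loads them with pandas, reassembles one kernel's cells in order, and joins code cells to the notebook-level metrics on the shared kernel ID. The column names used here (kernel_id, cell_index, source) are assumptions for illustration, not documented headers; check the actual columns after loading.

```python
import pandas as pd

# Load the three distilled CSV files. code.csv has over 12M rows,
# so consider pd.read_csv(..., chunksize=...) on modest hardware.
code = pd.read_csv("code.csv")
markdown = pd.read_csv("markdown.csv")
metrics = pd.read_csv("notebook_metrics.csv")

# Reassemble one notebook's code cells in order, assuming each row
# carries the kernel's ID and the cell's index within that kernel.
# (Column names are assumptions; inspect code.columns first.)
kernel_id = code["kernel_id"].iloc[0]  # pick any kernel for the demo
cells = code[code["kernel_id"] == kernel_id].sort_values("cell_index")
print(cells["source"].head())

# Join code cells with notebook-level metrics on the shared kernel ID.
joined = code.merge(metrics, on="kernel_id", how="inner")
```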
The kernels directory is organized according to Kaggle's Performance Tiers (PTs), the ranking system Kaggle uses to classify its users. It contains one directory per PT, each holding the IDs of the users in that tier, download logs, and the data needed to download those users' notebooks.
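As a purely hypothetical sketch of traversing this layout (the "PT*" directory naming is an assumption for illustration, not the dataset's documented structure):

```python
from pathlib import Path

kernels_root = Path("kernels")

# Walk the per-tier directories. The "PT*" glob pattern is a guess;
# adjust it to match the actual directory names in the release.
for pt_dir in sorted(kernels_root.glob("PT*")):
    n_entries = sum(1 for _ in pt_dir.iterdir())
    print(f"{pt_dir.name}: {n_entries} entries")
```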
The utility directory contains two important files:
- aggregate_data.py: A Python script that aggregates the per-PT data into the CSV files described above.
- application.ipynb: A Jupyter notebook serving as a simple example application of the metrics dataframe: it predicts an author's PT from the notebook metrics (a hedged sketch of this task follows the list).
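For a flavor of what application.ipynb demonstrates, here is a minimal sketch of predicting an author's PT from the notebook metrics. The label column name (author_pt) and the choice of classifier are assumptions for illustration, not the notebook's actual code:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

metrics = pd.read_csv("notebook_metrics.csv")

# "author_pt" is a hypothetical name for the performance-tier label;
# the remaining numeric columns serve as features.
y = metrics["author_pt"]
X = metrics.drop(columns=["author_pt"]).select_dtypes("number")

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```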
Researchers can leverage this distilled dataset for a variety of analyses without handling the bulk of the original 300 GB collection. The distilled dataset itself can be downloaded from the files attached to this record.
Please note that the original collection of Kaggle kernels exceeds 300 GB, making it impractical to upload to Zenodo directly. Researchers interested in the full, unprocessed kernels can contact the dataset maintainers for access.
If you use this dataset in your research, please cite the accompanying paper:
M. Mostafavi Ghahfarokhi, A. Asgari, M. Abolnejadian, and A. Heydarnoori, "DistilKaggle: A Distilled Dataset of Kaggle Jupyter Notebooks," in Proceedings of the 21st IEEE/ACM International Conference on Mining Software Repositories (MSR), Lisbon, Portugal, Apr. 2024.
@inproceedings{mostafavi-msr2024-DistilKaggle,
  title     = {DistilKaggle: A Distilled Dataset of Kaggle Jupyter Notebooks},
  author    = {Mojtaba Mostafavi Ghahfarokhi and Arash Asgari and Mohammad Abolnejadian and Abbas Heydarnoori},
  booktitle = {Proceedings of the 21st IEEE/ACM International Conference on Mining Software Repositories (MSR)},
  month     = {April},
  year      = {2024},
  publisher = {IEEE/ACM},
  address   = {Lisbon, Portugal},
}
Thank you for using DistilKaggle!