Tracking Progress in Rich Context
The Coleridge Initiative at NYU has been researching Rich Context to enhance search and discovery of datasets used in scientific research – see the Background Info section for more details. Partnering with experts throughout academia and industry, NYU-CI has worked to leverage closely adjacent fields: NLP/NLU, knowledge graphs, recommender systems, scholarly infrastructure, data mining from scientific literature, dataset discovery, linked data, open vocabularies, metadata management, data governance, and more. Leaderboards are published here on GitHub to track state-of-the-art (SOTA) progress among the top results.
Entity Linking for Datasets in Publications
The first challenge is to identify the datasets used in research publications, initially focused on the problem of entity linking. Research papers generally mention the datasets they've used, although there are limited formal means of describing that metadata in a machine-readable way. The goal here is to predict a set of dataset IDs for each publication; the dataset IDs within the corpus represent the set of all possible datasets that may appear.
Identifying dataset mentions typically requires:
- extracting text from an open access PDF
- some NLP parsing of the text
- feature engineering (e.g., attention to where text is located in a paper)
- modeling to identify up to 5 datasets per publication
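As a rough illustration of the steps above, here's a minimal, hypothetical baseline: it skips PDF extraction and assumes plain text is already available, then string-matches known dataset aliases and returns up to five dataset IDs. The `DATASET_ALIASES` entries are invented for the example; a real submission would load dataset metadata from the competition corpus and use far stronger NLP than literal matching.

```python
import re

# Hypothetical toy lookup of dataset IDs to name aliases; a real system
# would load these from the competition's corpus metadata files.
DATASET_ALIASES = {
    "dataset-001": ["National Health Interview Survey", "NHIS"],
    "dataset-002": ["Current Population Survey", "CPS"],
}

def predict_datasets(publication_text, max_predictions=5):
    """Naive entity-linking baseline: count literal alias matches in the
    publication text and return the top dataset IDs (at most five)."""
    scores = {}
    for dataset_id, aliases in DATASET_ALIASES.items():
        hits = sum(len(re.findall(re.escape(alias), publication_text))
                   for alias in aliases)
        if hits:
            scores[dataset_id] = hits
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:max_predictions]

text = "We analyze microdata from the Current Population Survey (CPS)."
print(predict_datasets(text))  # ['dataset-002']
```

Real entries on the leaderboard replace the matching step with learned models (e.g., document QA or entity-typing approaches), but the input/output contract — publication text in, up to five dataset IDs out — is the same.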
See Evaluating Models for Entity Linking with Datasets for details about how the Top5uptoD leaderboard metric is calculated.
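The linked evaluation notes are authoritative; purely as a hypothetical sketch of a metric of this general shape, a "top 5, up to D" score might credit the top five predictions against the gold set, normalized by D (the number of true datasets for the publication) capped at five. The function below is an assumption for illustration, not the official scoring code.

```python
def top5_upto_d(predicted, actual, k=5):
    """Hedged sketch of a Top5uptoD-style score: among the top-k
    predicted dataset IDs, count hits against the gold set, then
    normalize by min(k, D) where D = number of true dataset IDs."""
    if not actual:
        return 0.0
    hits = len(set(predicted[:k]) & set(actual))
    return hits / min(k, len(actual))

# Two of the three gold IDs appear in the predictions -> 2/3.
print(top5_upto_d(["a", "b", "c"], ["a", "c", "x"]))
```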
| team | Top5uptoD | code | repo | corpus | release | date | notes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LARC @philipskokoh | 0.7836 | ipynb | repo | RCC_1 | v0.1.5 | 2019-09-26 | RCLC baseline experiment using RCC_1 approach |
| KAIST @HaritzPuerto | 0.6319 | ipynb | repo | RCC_1 | v0.1.5 | 2019-11-01 | model trained on a different dataset using DocumentQA and Ultra-Fine Entity Typing -- NB: this approach is able to identify new datasets |
- How To Participate
- Corpus Description
- Download Resource Files
- Background Info
- Workflow Stages
- Glossary Terms
Use of open source and open standards is especially important to further the cause of effective, reproducible research. We're hosting this competition to focus on the research challenges of specific machine learning use cases encountered within Rich Context – see the Workflow Stages section.
If you have any questions about the Rich Context leaderboard competition – and especially if you identify any problems in the corpus (e.g., data quality, incorrect metadata, broken links, etc.) – please use this repo's GitHub issues and pull requests to report, discuss, and resolve them.