# Visually29K: a large-scale curated infographics dataset

This code is associated with the following project page: http://visdata.mit.edu/

In this repo, we provide metadata and annotations for thousands of infographics, covering a range of computer vision and natural language tasks. We used this data in the following reports: https://arxiv.org/pdf/1807.10441 and https://arxiv.org/pdf/1709.09215.

To learn how to use the data, see howto.ipynb.
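The notebook walks through the data in detail. As a minimal sketch of working with per-image annotation records (the file name and field names below are hypothetical — consult howto.ipynb for the actual schema), JSON annotations can be loaded and inspected like this:

```python
import json

# Hypothetical annotation record; the real field layout is
# documented in howto.ipynb.
sample = {"image_id": "infographic_00001", "tags": ["health", "nutrition"]}

# Write a sample file so the snippet is self-contained; in practice
# you would read the annotation files shipped with the dataset.
with open("annotation.json", "w") as f:
    json.dump(sample, f)

# Load an annotation file and access its fields.
with open("annotation.json") as f:
    record = json.load(f)

print(record["image_id"], record["tags"])
```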

If you use the data or code in this git repo, please consider citing:

@inproceedings{visually2,
    author    = {Spandan Madan*, Zoya Bylinskii*, Matthew Tancik*, Adrià Recasens, Kimberli Zhong, Sami Alsheikh, Hanspeter Pfister, Aude Oliva, Fredo Durand},
    title     = {Synthetically Trained Icon Proposals for Parsing and Summarizing Infographics},
    booktitle = {arXiv preprint arXiv:1807.10441},
    url       = {https://arxiv.org/pdf/1807.10441},
    year      = {2018}
}
@inproceedings{visually1,
    author    = {Zoya Bylinskii*, Sami Alsheikh*, Spandan Madan*, Adrià Recasens*, Kimberli Zhong, Hanspeter Pfister, Fredo Durand, Aude Oliva},
    title     = {Understanding infographics through textual and visual tag prediction},
    booktitle = {arXiv preprint arXiv:1709.09215},
    url       = {https://arxiv.org/pdf/1709.09215},
    year      = {2017}
}

## About

A large-scale infographics dataset from Visual.ly with metadata and additional crowdsourced annotations
