
Local News Dataset 2018


By Leon Yin
On 2018-08-14


This dataset is a machine-readable directory of state-level newspapers, TV stations, and magazines. In addition to basic information such as the name of each outlet and the state it is located in, all available information on web presence, social media (Twitter, YouTube, Facebook), and ownership is scraped, too.

The sources of this dataset are listings of newspapers and magazines by state, TV stations by state and by owner, and the homepages of the media corporations Meredith, Sinclair, Nexstar, Tribune, and Hearst.

This dataset was inspired by ProPublica's Congress API. I hope that this dataset will serve a similar purpose as a starting point for research and applications, as well as a bridge between datasets from social media, news articles and online communities.

If you notice irregularities, questionable entries, or missing outlets while using this dataset, please submit an issue on GitHub or contact me on Twitter. I'd love to hear how this dataset is put to work.

Happy hunting!

For an in-depth introduction, specs, data sheet, and quickstart, check out the Jupyter Notebook in nbs/local_news_dataset.ipynb.

What's the data look like?

|   | name | state | website | domain | twitter | youtube | facebook | owner | medium | source | collection_date |
|---|------|-------|---------|--------|---------|---------|----------|-------|--------|--------|-----------------|
| 0 | KWHE | HI | NaN | NaN | NaN | NaN | NaN | LeSea | TV station | stationindex | 2018-08-02 14:55:24.612585 |
| 1 | WGVK | MI | NaN | NaN | NaN | NaN | NaN | Grand Valley State University | TV station | stationindex | 2018-08-02 14:55:24.612585 |
| 2 | KNIC-CD | TX | NaN | NaN | NaN | NaN | NaN | Univision | TV station | stationindex | 2018-08-02 14:55:24.612585 |

You can also browse the dataset on Google Sheets,
or look at the raw dataset on GitHub,
or just browse the Jupyter Notebook's tech specs.

How is this Repo Organized?

The nbs directory has examples of how to use this dataset. The dataset was created in Python; the scripts to re-create and update it are in the py directory. In addition to the state and name of each media outlet, I also collect their web domain and social media (Twitter, Facebook, YouTube) IDs where available.


Several websites are scraped using the requests and BeautifulSoup Python packages. The column names are then normalized, and the tables merged.
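A minimal sketch of that pipeline, using toy inline HTML in place of a real station-index page (the real markup and column names differ):

```python
import pandas as pd
from bs4 import BeautifulSoup

# Toy HTML standing in for a scraped station-index page (hypothetical markup).
html = """
<table>
  <tr><td>KWHE</td><td>HI</td><td>LeSea</td></tr>
  <tr><td>WGVK</td><td>MI</td><td>Grand Valley State University</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
rows = [[td.get_text(strip=True) for td in tr.find_all("td")]
        for tr in soup.find_all("tr")]

# Normalize column names so frames from different sources can be concatenated.
df = pd.DataFrame(rows, columns=["name", "state", "owner"])
df["medium"] = "TV station"
df["source"] = "stationindex"
print(df)
```

Each source gets the same treatment, and the resulting frames share a common schema so they can be stacked with `pd.concat`.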


There can be several entries with the same domain.
Why? Certain city-level publications are subdomains of larger state-level sites. There is a preprocessed version for domain-level analysis here:
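If you'd rather collapse duplicates yourself, one row per domain can be kept with pandas; a sketch with hypothetical values (the column names follow the table above):

```python
import pandas as pd

# Toy rows: two city-level outlets sharing one state-level domain (made-up values).
df = pd.DataFrame({
    "name":   ["Outlet A", "Outlet B", "KWHE"],
    "state":  ["TX", "TX", "HI"],
    "domain": ["statepaper.com", "statepaper.com", "khon2.com"],
})

# Keep the first row seen for each domain, giving a domain-level frame.
df_domains = df.drop_duplicates(subset="domain", keep="first")
print(len(df_domains))  # 2
```

Note that `keep="first"` silently discards the other city-level outlets on a shared domain, so use the outlet-level file when counting publications.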

Using the Dataset

The dataset can be downloaded as a raw GitHub file through the website, or from the command line:


The dataset can also be loaded directly into a Pandas DataFrame.

import pandas as pd

url = ''  # raw GitHub URL of the dataset CSV
df_local_news = pd.read_csv(url)
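Once loaded, the frame can be sliced by state or medium. A sketch using a small stand-in frame with the same columns as the sample table (the real file has more columns and thousands of rows):

```python
import pandas as pd

# Stand-in rows mirroring the sample above.
df = pd.DataFrame({
    "name":   ["KWHE", "WGVK", "KNIC-CD"],
    "state":  ["HI", "MI", "TX"],
    "medium": ["TV station", "TV station", "TV station"],
    "owner":  ["LeSea", "Grand Valley State University", "Univision"],
})

# All TV stations located in Texas:
tx_tv = df[(df["state"] == "TX") & (df["medium"] == "TV station")]
print(tx_tv["name"].tolist())  # ['KNIC-CD']
```

The same boolean-mask pattern works for filtering by owner or by source.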


I'd like to acknowledge the work of the people behind and for compiling lists of local media outlets. Andreu Casas and Gregory Eady provided invaluable comments to improve this dataset for public release. Leon Yin is a member of the SMaPP Lab at NYU. Thank you Josh Tucker, Jonathan Nagler, Richard Bonneau, and my colleague Nicole Baram.


If this dataset is helpful to you, please cite it as:

@misc{yin_local_news_dataset_2018,
  author       = {Leon Yin},
  title        = {Local News Dataset},
  month        = aug,
  year         = 2018,
  doi          = {10.5281/zenodo.1345145},
  url          = {}
}








