federalRegister

An interface for collecting and parsing Federal Register documents

Installation

Clone the code from GitHub.

The module requires a file named config.py in the root project directory. The config file must define a variable named dataDir, which points to the root directory where the data will be saved.
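A minimal config.py might look like the following (the path shown is a placeholder to replace with your own data directory):

```python
# config.py -- placed in the root project directory.
# dataDir is the required setting: the root directory under which
# metadata (meta/), XML (xml/), and parsed files (parsed/) are stored.
dataDir = "/path/to/federal-register-data"
```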

Data collection

The module collects data from two sources:

  1. dataCollection/downloadMetadata.py downloads metadata describing Federal Register documents from the federalregister.gov API. Raw metadata is saved in annual zipped json files in dataDir/meta.

  2. dataCollection/downloadXML.py downloads the text of daily Federal Register documents from govinfo.gov. Raw XML files are saved in dataDir/xml.
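For the metadata step, the federalregister.gov API serves document metadata as JSON from its documents.json endpoint. The sketch below only builds an example query URL; the actual request logic in downloadMetadata.py may differ, and the specific query parameters here are illustrative assumptions:

```python
from urllib.parse import urlencode

# Base endpoint of the federalregister.gov documents API.
BASE = "https://www.federalregister.gov/api/v1/documents.json"

def metadata_url(year, page=1, per_page=1000):
    """Build an example query URL for one year's document metadata.

    The parameter names below (per_page, page, publication-date
    condition) are illustrative; downloadMetadata.py's real queries
    may use different fields or pagination.
    """
    params = {
        "per_page": per_page,
        "page": page,
        "conditions[publication_date][year]": year,
    }
    return BASE + "?" + urlencode(params)

print(metadata_url(2020))
```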

dataCollection/compileParsed.py builds parsed versions of the documents, where the XML is converted into Pandas data tables. These files are saved as pickled dataframes in dataDir/parsed. Files are named by document number, which must be extracted from the XML itself (and occasionally contains errors). The XML files sometimes contain duplicate printings of the same document, but each document only appears once in the parsed directory.
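The conversion from XML to a Pandas table can be sketched on a toy document. The element names below (FRDOC, HD, P) follow Federal Register XML conventions, but the row layout is an invented illustration, not the actual schema produced by compileParsed.py:

```python
import xml.etree.ElementTree as ET

import pandas as pd

# A toy Federal Register-style XML fragment. Note that the document
# number lives inside the <FRDOC> element, so it must be extracted
# from the XML itself, as described above.
xml_text = """
<FEDREG>
  <RULE>
    <FRDOC>[FR Doc. 2020-00001]</FRDOC>
    <HD>Example Rule Heading</HD>
    <P>First paragraph of the rule.</P>
    <P>Second paragraph of the rule.</P>
  </RULE>
</FEDREG>
"""

root = ET.fromstring(xml_text)
rows = []
for doc in root.iter("RULE"):
    docno = doc.findtext("FRDOC", default="").strip()
    # One row per child element, tagged with the enclosing document number.
    for elem in doc:
        rows.append({"docno": docno, "tag": elem.tag, "text": (elem.text or "").strip()})

df = pd.DataFrame(rows)
print(df)
```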

The complete dataset can be downloaded from scratch or updated to the latest available data by running update.py.

The complete dataset is approximately 20GB in size.

Loading data

Cleaned and processed data can be loaded through loaders.py. The most important functions are:

  • loadInfoDF loads all document metadata as a single dataframe
  • iterParsed iteratively loads available parsed documents
  • loadParsed loads a single parsed document
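The loaders' exact signatures are not documented here, but the storage layout described above (pickled DataFrames named by document number under dataDir/parsed) suggests the pattern sketched below. The load_parsed function is hypothetical and stands in for what loadParsed presumably does:

```python
import os
import tempfile

import pandas as pd

def load_parsed(data_dir, doc_number):
    """Hypothetical loader: read one parsed document as a DataFrame.

    Assumes the layout described above -- a pickled DataFrame per
    document number in data_dir/parsed. The real loadParsed signature
    may differ.
    """
    return pd.read_pickle(os.path.join(data_dir, "parsed", doc_number + ".pkl"))

# Round-trip demonstration with a throwaway dataDir.
with tempfile.TemporaryDirectory() as data_dir:
    os.makedirs(os.path.join(data_dir, "parsed"))
    df = pd.DataFrame({"tag": ["HD", "P"], "text": ["Heading", "Body"]})
    df.to_pickle(os.path.join(data_dir, "parsed", "2020-00001.pkl"))
    loaded = load_parsed(data_dir, "2020-00001")
    print(loaded.equals(df))  # True
```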
