Clojure Data Analysis Cookbook
Looking to use Clojure for data analysis?
Throughout the book, I use a number of datasets. Some of these are standard datasets, some are from the UCI Machine Learning Repository, some from census.ire.org, some from other sources, and some I've put together myself. I've uploaded them all here for archiving and easy access. Here they all are, with a few notes about each:
Census Data
This data is downloaded from the Investigative Reporters and Editors Census dataset site. You can also download raw census data from the US Census Bureau.
- all_160.P3.csv: This is race data (P3) from the census. This is a place-level summary (160), and I've merged this data for all states.
- all_160_in_51.P3.csv: This is race data (P3) from the census. This is a place-level summary (160) for Virginia (51).
- all_160_in_51.P35.csv: This is family counts (P35) from the census. This is a place-level summary (160) for Virginia (51).
- census-race.json: This is the data from all_160.P3.csv, mentioned above, translated into JSON.
- clusters.json: This is a graph of the clusters of the data from all_160.P3.csv, mentioned above. The clusters were generated by K-means clustering of that dataset, aggregated by state. The JSON data structure represents the nodes and links (edges) in the graph, along with the aggregated data.
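Once downloaded, these CSV files can be pulled straight into Incanter. A minimal sketch, assuming the file sits in a local data/ directory (adjust the path to wherever you saved it):

```clojure
;; Load one of the census CSV files with Incanter, treating the
;; first row as a header.
(require '[incanter.core :as i]
         '[incanter.io :as iio])

(def race-data
  (iio/read-dataset "data/all_160_in_51.P3.csv" :header true))

;; Inspect the column names and dimensions before digging in.
(i/col-names race-data)
(i/dim race-data)
```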
Abalone
This dataset is from the UCI Machine Learning Repository. It contains the sex, age, and physical measurements of abalone, and it can be used to predict an abalone's age from its physical measurements.
Traffic Accidents
This dataset was selected and downloaded from the US National Highway Traffic Safety Administration. It includes the speed limit and other factors related to each accident.
Chick Weight
This is from the Incanter datasets package. It's also found in the R datasets package.
- chick-weight.json: This is the Incanter dataset converted to JSON.
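Because this one ships with Incanter's datasets package, it can also be loaded by keyword rather than from a file. A quick sketch:

```clojure
;; Load the chick-weight data bundled with Incanter.
(require '[incanter.core :as i]
         '[incanter.datasets :as ds])

(def chick-weight (ds/get-dataset :chick-weight))

;; See what columns the dataset provides.
(i/col-names chick-weight)
```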
Currencies and Exchange Rates
These are a couple of datasets used to illustrate working with semantic web data and web scraping.
Doctor Who Companions
FASTA Files
FASTA files are used in bioinformatics to exchange nucleotide and peptide sequences. This is a small collection of them to use for testing a custom FASTA parser.
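For reference, a FASTA record is a `>`-prefixed header line followed by one or more sequence lines. A minimal parser along those lines might look like this (the `parse-fasta` name and the record shape are my own, not from the book, and it assumes well-formed input with no blank lines):

```clojure
(require '[clojure.string :as str])

(defn parse-fasta
  "Parse a FASTA-formatted string into a seq of maps with
  :header and :sequence keys."
  [text]
  (->> (str/split-lines text)
       ;; Group runs of header lines and runs of sequence lines.
       (partition-by #(str/starts-with? % ">"))
       ;; Pair each header group with the sequence group that follows it.
       (partition 2)
       (map (fn [[[header] seq-lines]]
              {:header   (subs header 1)          ; drop the leading ">"
               :sequence (apply str seq-lines)})))) ; join wrapped lines
```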
IBM stock prices
This dataset was downloaded from Google Finance. It contains the prices of IBM stock for the period between Nov 26, 2001 and Nov 23, 2012.
Ionosphere
This dataset is from an antenna array in Labrador. It contains a number of measurements of free electrons in the ionosphere. The dataset can be found in the UCI Machine Learning Repository, but this copy is in Attribute-Relation File Format (ARFF) for use with Weka.
Iris
This is a standard dataset that's found almost everywhere. We also use the copy that ships with Incanter several times in the book. For more information about this dataset, see its page at the UCI Machine Learning Repository.
Mushroom
This is another standard dataset from the UCI Machine Learning Repository. It contains categorical data on mushrooms, including whether they're edible or poisonous.
TV-Related Sample Datasets
This is a series of datasets I threw together to illustrate loading different data formats.
The Adventures of Sherlock Holmes
This text is from Project Gutenberg. It's a collection of Sherlock Holmes short stories written by Sir Arthur Conan Doyle.
Spelling Training Corpus
This is the training corpus used in Peter Norvig's article, "How to Write a Spelling Corrector."
World Bank dataset
I downloaded this dataset about income inequality from the World Bank. It needed to be filtered and pivoted; this is the final result.
Land Use in China
This is a dataset on how much land is used for agriculture in China.
Delicious RSS Feed
This is a compressed subset of a scrape of Delicious RSS feeds. I can't find the original online anywhere anymore, so I'm putting it here.
State of the Union dataset
This is a scraping of US Presidents' State of the Union (SOTU) addresses.
US Domestic Flights
This is a compressed copy of data on US domestic flights from 1990 to 2009.