2 changes: 0 additions & 2 deletions MANIFEST.in

This file was deleted.

2 changes: 1 addition & 1 deletion docs/Testing_Code_with_pytest.md
@@ -1,5 +1,5 @@
# Results files for dataloading tests
-The dataloading tests rely on having some raw results data to load. And the results data should be various enough to test the various components of the data-loading code. In other words, effective testing requires a reasonable variety of input files. The repository does not contain sufficient results data for testing. A test set is available in a separate repository, [***TODO](***TODO). If [test_dataloading_by_ej.py](../tests/dataloading_tests/test_dataloading_by_ej.py) does not find results data, it will default to downloading the files from that repository.
+The dataloading tests rely on having some raw results data to load. And the results data should be various enough to test the various components of the data-loading code. In other words, effective testing requires a reasonable variety of input files. The repository does not contain sufficient results data for testing. A test set is available in a separate repository, [TestingData](https://github.com/ElectionDataAnalysis/TestingData). If [test_dataloading_by_ej.py](../tests/dataloading_tests/test_dataloading_by_ej.py) does not find results data, it will default to downloading the files from that repository.

# Sample Testing Session

2 changes: 1 addition & 1 deletion docs/User_Guide.md
@@ -471,7 +471,7 @@ analyzer = dl.analyzer
```

## Tabular Export
-The Analyzer class has a number of functions that allow you to aggregate the data for analysis purposes. For example, running the `.top_counts()` function exports files into your rollup_dataframe directory which with counts summed up at a particular reporting unit level. This function expects 4 arguments: the election, the jurisdiction, the reporting unit level at which the aggregation will occur, and a boolean variable indicating whether you would like the data aggregated by vote count type. For example, to export all 2020 General results in your database to a tab-separated file `tabular_results.tsv`:
+The Analyzer class has a number of functions that allow you to aggregate the data for analysis purposes. For example, to export all 2020 General results in your database to a tab-separated file `tabular_results.tsv`:
```
analyzer.export_election_to_tsv("tabular_results.tsv", "2020 General")
```
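The export above produces a plain tab-separated file, so it can be inspected with the standard library alone. A minimal sketch of reading such a file, assuming hypothetical column names (the actual schema of `export_election_to_tsv` output is not shown in this diff):

```python
import csv
import os
import tempfile

# Hypothetical stand-in for a file produced by export_election_to_tsv;
# the column names here are illustrative, not the library's real schema.
path = os.path.join(tempfile.mkdtemp(), "tabular_results.tsv")
with open(path, "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["Election", "Contest", "Count"])
    writer.writerow(["2020 General", "US President", "100"])

# Read the TSV back with the same delimiter.
with open(path, newline="") as f:
    rows = list(csv.reader(f, delimiter="\t"))
print(rows[0])  # header row
```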
27 changes: 1 addition & 26 deletions src/electiondata/__init__.py
@@ -2978,31 +2978,6 @@ def export_outlier_data(
)
return agg_results

-    def top_counts(
-        self, election: str, jurisdiction: str, sub_rutype: str, by_vote_type: bool
-    ) -> Optional[str]:
-        """
-        Inputs:
-            election: str,
-            jurisdiction: str,
-            sub_rutype: str, ReportingUnitType (e.g., 'county') to which the results should be rolled up
-            by_vote_type: bool, if true, results will be reported by vote type. If false, only totals will be reported
-
-        Puts file with results into a subdirectory (labeled by election and jurisdiction name)
-        of the reports_and_plots_dir specified in the Analyzer's param_file.
-        """
-        jurisdiction_id = db.name_to_id(self.session, "ReportingUnit", jurisdiction)
-        election_id = db.name_to_id(self.session, "Election", election)
-        err = an.export_rollup(
-            self.session,
-            self.reports_and_plots_dir,
-            jurisdiction_id=jurisdiction_id,
-            sub_rutype=sub_rutype,
-            election_id=election_id,
-            by_vote_type=by_vote_type,
-        )
-        return err
-
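The removed `top_counts` method rolled vote counts up to a given reporting-unit level, optionally broken out by vote type. A minimal pure-Python sketch of that kind of aggregation, using toy data and hypothetical field names (not the library's implementation, which delegated to `an.export_rollup`):

```python
from collections import defaultdict

# Toy rows standing in for vote-count records; field names are illustrative.
results = [
    {"county": "Alpha", "vote_type": "election-day", "count": 100},
    {"county": "Alpha", "vote_type": "absentee", "count": 50},
    {"county": "Beta", "vote_type": "election-day", "count": 75},
]

def rollup(rows, by_vote_type):
    # Sum counts per reporting unit; keep vote type as a key only if requested.
    totals = defaultdict(int)
    for r in rows:
        key = (r["county"], r["vote_type"]) if by_vote_type else (r["county"],)
        totals[key] += r["count"]
    return dict(totals)

print(rollup(results, by_vote_type=False))  # {('Alpha',): 150, ('Beta',): 75}
```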
def export_nist(
self,
election: str,
@@ -3829,7 +3804,7 @@ def external_data_exists(

     connection = an.session.bind.raw_connection()
     cursor = connection.cursor()
-    df = db.read_external(cursor, election_id, jurisdiction_id, ["Label"])
+    df = db.read_external_cursor(cursor, election_id, jurisdiction_id, ["Label"])
     cursor.close()

     # if no data found
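The rename to `read_external_cursor` reflects that the function operates on a raw DB-API cursor obtained from the session, as the surrounding lines show. A self-contained sketch of that cursor-based pattern using `sqlite3`; the helper and schema here are hypothetical stand-ins, not the project's actual `db` module:

```python
import sqlite3

# In-memory database standing in for the project's backend.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE external (label TEXT)")
cur.execute("INSERT INTO external VALUES ('Census 2020')")
conn.commit()

def read_external_cursor(cursor, fields):
    # Illustrative stand-in: fetch rows via the raw cursor and label the columns.
    cursor.execute("SELECT label FROM external")
    return [dict(zip(fields, row)) for row in cursor.fetchall()]

rows = read_external_cursor(cur, ["Label"])
cur.close()
print(rows)  # [{'Label': 'Census 2020'}]
```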