Data collection: campaign donors #290

Merged 28 commits on Dec 1, 2017
Commits
435214d
Uploading script that packs all donation data into three datasets.
lacerdamarcelo Oct 12, 2017
a95288b
Almost finishing the script (running tests).
lacerdamarcelo Oct 20, 2017
30b143b
Uploading script for generating donations data.
lacerdamarcelo Oct 21, 2017
a892167
Deleting ipynb.
lacerdamarcelo Oct 21, 2017
5786fe7
Fixing indentation.
lacerdamarcelo Oct 21, 2017
7a4e198
Fixing readability issues, setting the correct paths for the generate…
lacerdamarcelo Oct 27, 2017
403907c
Fixing translations.
lacerdamarcelo Oct 27, 2017
aaacfe1
Improving code reuse.
lacerdamarcelo Oct 31, 2017
f4e01e3
Translating 'Tipo diretorio' and 'Sequencial Diretorio'.
lacerdamarcelo Nov 4, 2017
feec4fa
Fixing alphabetical order of translations.
lacerdamarcelo Nov 4, 2017
b5b872b
Merge branch 'issue_275' of https://github.com/lacerdamarcelo/serenat…
cuducos Nov 19, 2017
4b669e1
Refactor download function
cuducos Nov 19, 2017
88a5141
Add wrapper to read CSV
cuducos Nov 19, 2017
fefa452
Refactor 2010 directory reader
cuducos Nov 19, 2017
5a28ab2
Refactor 2012-2016 directory readers
cuducos Nov 19, 2017
77ead0c
Use context manager to download, extract and cleanup
cuducos Nov 20, 2017
86d4c5d
Clean, normalize and translate column names
cuducos Nov 20, 2017
18d5bc5
Refactor main section
cuducos Nov 20, 2017
e92141e
Fix progress bar
cuducos Nov 20, 2017
52598c0
Fix class constants
cuducos Nov 20, 2017
0b6b8e6
Fix bugs: filename and glob patterns
cuducos Nov 20, 2017
190a24a
Fix file paths
cuducos Nov 20, 2017
895efc9
Merge pull request #1 from cuducos/cuducos-campaign-donations-data
lacerdamarcelo Nov 27, 2017
98b562e
Skip non-existent filename
cuducos Nov 29, 2017
e22e2c3
Merge pull request #2 from cuducos/cuducos-campaign-donations-data
lacerdamarcelo Nov 30, 2017
908def3
Fix: no bool value for pandas.DataFrame
cuducos Dec 1, 2017
7194fac
Document donation scripts and datasets
cuducos Dec 1, 2017
8e8d139
Merge pull request #3 from cuducos/cuducos-campaign-donations-data
lacerdamarcelo Dec 1, 2017
5 changes: 4 additions & 1 deletion CONTRIBUTING.md
@@ -168,6 +168,7 @@ fetch_latest_backup('data/')

##### Electoral information
1. `research/src/fetch_tse_data.py` downloads dataset files from the TSE website and organizes them into the dataset `research/data/YYYY-MM-DD-tse-candidates.xz`.
1. `research/src/fetch_campaign_donations.py` downloads datasets with data on donations to electoral campaigns (donations to candidates, committees and parties) into three files: `research/data/YYYY-MM-DD-donations-candidates.xz`, `research/data/YYYY-MM-DD-donations-committees.xz` and `research/data/YYYY-MM-DD-donations-parties.xz`.

##### Companies and Non-Profit Entities with sanctions (CEIS, CEPIM and CNEP).
1. `research/src/fetch_federal_sanctions.py` downloads all three dataset files (CEIS, CEPIM and CNEP) from the official source. The script gets the latest version available for each dataset, unpacks it, translates the columns to English and saves them into `research/data/`. The files are named as follows:
@@ -202,7 +203,9 @@ All files are named with a [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) da
1. `research/data/YYYY-MM-DD-congressperson-details.xz` contains the birth date, gender and civil name of congresspeople.
1. `research/data/YYYY-MM-DD-brazilian-cities.csv` contains information about all Brazilian cities (e.g. city code, state and name).
1. `research/data/YYYY-MM-DD-receipts-texts.xz` OCR of nearly 200k reimbursement receipts using Google's Cloud Vision API; for more information see the documentation in [docs/receipts-ocr.md](docs/receipts-ocr.md)
1. `research/data/YYYY-MM-DD-donations-candidates.xz` contains data about donations to candidates since the 2010 election
1. `research/data/YYYY-MM-DD-donations-committees.xz` contains data about donations to electoral committees since the 2010 election
1. `research/data/YYYY-MM-DD-donations-parties.xz` contains data about donations to political parties since the 2010 election

## Four moments

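The donation datasets above are plain CSV compressed with LZMA, so pandas can write and read them directly. A minimal sketch of the round trip (the sample frame, column names and filename here are hypothetical, not real TSE data):

```python
import os
import tempfile

import pandas as pd

# Hypothetical sample standing in for a real donations dataset
sample = pd.DataFrame({
    'donor_name': ['ACME LTDA', 'FOO SA'],
    'revenue_value': [1000.0, 250.0],
})

# The fetch script saves with to_csv(..., compression='xz');
# on read, pandas infers the LZMA codec from the .xz extension
path = os.path.join(tempfile.mkdtemp(), 'donations-sample.xz')
sample.to_csv(path, index=False, compression='xz')
loaded = pd.read_csv(path)

assert loaded.equals(sample)
```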
236 changes: 236 additions & 0 deletions research/src/fetch_campaign_donations.py
@@ -0,0 +1,236 @@
import os
import shutil
from datetime import date
from pathlib import Path
from zipfile import ZipFile

import pandas as pd
import requests
from tqdm import tqdm


BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
DATA_PATH = os.path.join(BASE_DIR, 'data')
KEYS = ('candidates', 'parties', 'committees')
YEARS = range(2010, 2017, 2)


class Donation:
    """Context manager to download, read data from a given year and clean up"""

    URL = 'http://agencia.tse.jus.br/estatistica/sead/odsele/prestacao_contas'

    ZIPNAMES = {
        2010: 'prestacao_contas_2010.zip',
        2012: 'prestacao_final_2012.zip',
        2014: 'prestacao_final_2014.zip',
        2016: 'prestacao_contas_final_2016.zip',
    }

    FILENAMES = {
        2012: (
            'receitas_candidatos_2012_brasil.txt',
            'receitas_partidos_2012_brasil.txt',
            'receitas_comites_2012_brasil.txt'
        ),
        2014: (
            'receitas_candidatos_2014_brasil.txt',
            'receitas_partidos_2014_brasil.txt',
            'receitas_comites_2014_brasil.txt'
        ),
        2016: (
            'receitas_candidatos_prestacao_contas_final_2016_brasil.txt',
            'receitas_partidos_prestacao_contas_final_2016_brasil.txt',
            None
        )
    }

    NORMALIZE_COLUMNS = {
        'candidates': {
            'Descricao da receita': 'Descrição da receita',
            'Especie recurso': 'Espécie recurso',
            'Numero candidato': 'Número candidato',
            'Numero do documento': 'Número do documento',
            'Numero Recibo Eleitoral': 'Número Recibo Eleitoral',
            'Sigla Partido': 'Sigla Partido'
        },
        'parties': {
            'Sigla Partido': 'Sigla Partido',
            'Número recibo eleitoral': 'Número Recibo Eleitoral'
        },
        'committees': {
            'Sigla Partido': 'Sigla Partido',
            'Tipo comite': 'Tipo Comite',
            'Número recibo eleitoral': 'Número Recibo Eleitoral'
        }
    }

    TRANSLATIONS = {
        'Cargo': 'post',
        'CNPJ Prestador Conta': 'accountable_company_id',
        'Cod setor econômico do doador': 'donor_economic_setor_id',
        'Cód. Eleição': 'election_id',
        'CPF do candidato': 'candidate_cpf',
        'CPF do vice/suplente': 'substitute_cpf',
        'CPF/CNPJ do doador': 'donor_cnpj_or_cpf',
        'CPF/CNPJ do doador originário': 'original_donor_cnpj_or_cpf',
        'Data da receita': 'revenue_date',
        'Data e hora': 'date_and_time',
        'Desc. Eleição': 'election_description',
        'Descrição da receita': 'revenue_description',
        'Entrega em conjunto?': 'batch',
        'Espécie recurso': 'type_of_revenue',
        'Fonte recurso': 'source_of_revenue',
        'Município': 'city',
        'Nome candidato': 'candidate_name',
        'Nome da UE': 'electoral_unit_name',
        'Nome do doador': 'donor_name',
        'Nome do doador (Receita Federal)': 'donor_name_for_federal_revenue',
        'Nome do doador originário': 'original_donor_name',
        'Nome do doador originário (Receita Federal)':
            'original_donor_name_for_federal_revenue',
        'Número candidato': 'candidate_number',
        'Número candidato doador': 'donor_candidate_number',
        'Número do documento': 'document_number',
        'Número partido doador': 'donor_party_number',
        'Número Recibo Eleitoral': 'electoral_receipt_number',
        'Número UE': 'electoral_unit_number',
        'Sequencial Candidato': 'candidate_sequence',
        'Sequencial prestador conta': 'accountable_sequence',
        'Sequencial comite': 'committee_sequence',
        'Sequencial Diretorio': 'party_board_sequence',
        'Setor econômico do doador': 'donor_economic_sector',
        'Setor econômico do doador originário':
            'original_donor_economic_sector',
        'Sigla da UE': 'electoral_unit_abbreviation',
        'Sigla Partido': 'party_acronym',
        'Sigla UE doador': 'donor_electoral_unit_abbreviation',
        'Tipo de documento': 'document_type',
        'Tipo diretorio': 'party_board_type',
        'Tipo doador originário': 'original_donor_type',
        'Tipo partido': 'party_type',
        'Tipo receita': 'revenue_type',
        'Tipo comite': 'committee_type',
        'UF': 'state',
        'Valor receita': 'revenue_value'
    }

    def __init__(self, year):
        self.year = year
        self.zip_file = self.ZIPNAMES.get(year)
        self.url = '{}/{}'.format(self.URL, self.zip_file)
        self.directory, _ = os.path.splitext(self.zip_file)
        self.path = Path(self.directory)

    def _download(self):
        """Saves the file from `url` into a local path, showing a progress bar"""
        print('Downloading {}…'.format(self.url))
        request = requests.get(self.url, stream=True)
        total = int(request.headers.get('content-length', 0))
        with open(self.zip_file, 'wb') as file_handler:
            block_size = 2 ** 15  # ~32KiB
            kwargs = dict(total=total, unit='B', unit_scale=True)
            with tqdm(**kwargs) as progress_bar:
                for data in request.iter_content(block_size):
                    file_handler.write(data)
                    progress_bar.update(block_size)

    def _unzip(self):
        print('Uncompressing {}…'.format(self.zip_file))
        with ZipFile(self.zip_file, 'r') as zip_handler:
            zip_handler.extractall(self.directory)

    def _read_csv(self, path, chunksize=None):
        """Wrapper to read CSV with default args and an optional `chunksize`"""
        kwargs = dict(low_memory=False, encoding='ISO-8859-1', sep=';')
        if chunksize:
            kwargs['chunksize'] = chunksize

        data = pd.read_csv(path, **kwargs)
        return pd.concat([chunk for chunk in data]) if chunksize else data

    def _data_by_pattern(self, pattern):
        """
        Given a glob pattern, loads all files matching this pattern and then
        concats them all into a single data frame
        """
        data = [self._read_csv(name) for name in self.path.glob(pattern)]
        return pd.concat(data)

    def _data(self):
        """
        Returns a dictionary with data frames for candidates, parties and
        committees
        """
        files = self.FILENAMES.get(self.year)
        if not files:  # it's 2010, a different file architecture
            return {
                'candidates': self._data_by_pattern('**/ReceitasCandidatos*'),
                'parties': self._data_by_pattern('**/ReceitasPartidos*'),
                'committees': self._data_by_pattern('**/ReceitasComites*')
            }

        paths = (
            os.path.join(self.directory, filename)
            for filename in files
            if filename
        )
        return {
            key: self._read_csv(path, chunksize=10000)
            for key, path in zip(KEYS, paths)
            if os.path.exists(path)
        }

    @property
    def data(self):
        """Takes self._data, then cleans, normalizes and translates it"""
        data = self._data()
        for key in KEYS:
            normalize_columns = self.NORMALIZE_COLUMNS.get(key)
            if key in data:
                # strip column names ('foobar ' -> 'foobar')
                names = data[key].columns.values
                cleaned_columns = {name: name.strip() for name in names}
                data[key].rename(columns=cleaned_columns, inplace=True)
                # normalize & translate
                data[key].rename(columns=normalize_columns, inplace=True)
                data[key].rename(columns=self.TRANSLATIONS, inplace=True)
        return data

    def __enter__(self):
        self._download()
        self._unzip()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        print('Cleaning up source files from {}…'.format(self.year))
        os.remove(self.zip_file)
        shutil.rmtree(self.directory)


def save(key, data):
    """Given a key and a data frame, saves it compressed with LZMA"""
    if not os.path.exists(DATA_PATH):
        os.makedirs(DATA_PATH)

    prefix = date.today().strftime('%Y-%m-%d')
    filename = '{}-donations-{}.xz'.format(prefix, key)
    print('Saving {}…'.format(filename))
    data.to_csv(os.path.join(DATA_PATH, filename), compression='xz')


def fetch_data_from(year):
    with Donation(year) as donation:
        return donation.data


if __name__ == '__main__':
    by_year = tuple(fetch_data_from(year) for year in YEARS)
    for key in KEYS:
        data = pd.concat([
            dataframes.get(key) for dataframes in by_year
            if isinstance(dataframes.get(key), pd.DataFrame)
        ])
        save(key, data)
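The chunked reading that `_read_csv` enables follows a standard pandas pattern: passing `chunksize` turns `read_csv` into an iterator of DataFrames, which `pd.concat` stitches back into a single frame. A minimal self-contained sketch, using hypothetical semicolon-separated content in the style of the TSE files:

```python
import io

import pandas as pd

# Hypothetical semicolon-separated content, mimicking the TSE files
csv = 'a;b\n1;2\n3;4\n5;6\n'

# With chunksize, read_csv returns an iterator of DataFrames...
chunks = pd.read_csv(io.StringIO(csv), sep=';', chunksize=2)

# ...which pd.concat stitches back into a single frame
data = pd.concat(chunk for chunk in chunks)

assert len(data) == 3
```

Reading in chunks keeps peak memory bounded while each piece is parsed, which matters for the larger election years.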