
feat: spiders: add all available publishers from Mexico Plataforma Digital Nacional #1078

Merged 4 commits on Apr 16, 2024
42 changes: 42 additions & 0 deletions docs/spiders.rst
@@ -768,6 +768,20 @@ Mexico

scrapy crawl mexico_mexico_state_infoem

.. autoclass:: kingfisher_scrapy.spiders.mexico_mexico_state_sesaemm_plataforma_digital_nacional.MexicoMexicoStateSESAEMMPlataformaDigitalNacional
:no-members:

.. code-block:: bash

scrapy crawl mexico_mexico_state_sesaemm_plataforma_digital_nacional

.. autoclass:: kingfisher_scrapy.spiders.mexico_michoacan_sesea_plataforma_digital_nacional.MexicoMichoacanSESEAPlataformaDigitalNacional
:no-members:

.. code-block:: bash

scrapy crawl mexico_michoacan_sesea_plataforma_digital_nacional

.. autoclass:: kingfisher_scrapy.spiders.mexico_nuevo_leon_cotai.MexicoNuevoLeonCOTAI
:no-members:

@@ -789,6 +803,13 @@ Mexico

scrapy crawl mexico_nuevo_leon_releases

.. autoclass:: kingfisher_scrapy.spiders.mexico_puebla_state_seseap_plataforma_digital_nacional.MexicoPueblaStateSESEAPlataformaDigitalNacional
:no-members:

.. code-block:: bash

scrapy crawl mexico_puebla_state_seseap_plataforma_digital_nacional

.. autoclass:: kingfisher_scrapy.spiders.mexico_quien_es_quien_releases.MexicoQuienEsQuienReleases
:no-members:

@@ -803,6 +824,20 @@ Mexico

scrapy crawl mexico_quintana_roo_idaip

.. autoclass:: kingfisher_scrapy.spiders.mexico_quintana_roo_sesaeqroo_plataforma_digital_nacional.MexicoQuintanaRooSESAEQROOPlataformaDigitalNacional
:no-members:

.. code-block:: bash

scrapy crawl mexico_quintana_roo_sesaeqroo_plataforma_digital_nacional

.. autoclass:: kingfisher_scrapy.spiders.mexico_shcp_plataforma_digital_nacional.MexicoSHCPPlataformaDigitalNacional
:no-members:

.. code-block:: bash

scrapy crawl mexico_shcp_plataforma_digital_nacional

.. autoclass:: kingfisher_scrapy.spiders.mexico_sinaloa_ceaip.MexicoSinaloaCEAIP
:no-members:

@@ -817,6 +852,13 @@ Mexico

scrapy crawl mexico_veracruz_ivai

.. autoclass:: kingfisher_scrapy.spiders.mexico_veracruz_state_sesea_plataforma_digital_nacional.MexicoVeracruzStateSESEAPlataformaDigitalNacional
:no-members:

.. code-block:: bash

scrapy crawl mexico_veracruz_state_sesea_plataforma_digital_nacional

.. autoclass:: kingfisher_scrapy.spiders.mexico_yucatan_inaip.MexicoYucatanINAIP
:no-members:

11 changes: 7 additions & 4 deletions kingfisher_scrapy/base_spiders/base_spider.py
@@ -278,10 +278,13 @@ def build_request(self, url, formatter, **kwargs):
:returns: a Scrapy request
:rtype: scrapy.Request
"""
-file_name = formatter(url)
-if not file_name.endswith(('.json', '.csv', '.xlsx', '.rar', '.zip')):
-    file_name += '.json'
-meta = {'file_name': file_name}
+meta = {}
+if formatter is None:
+    assert kwargs['meta']['file_name']
+else:
+    meta['file_name'] = formatter(url)
+    if not meta['file_name'].endswith(('.json', '.csv', '.xlsx', '.rar', '.zip')):
+        meta['file_name'] += '.json'
if 'meta' in kwargs:
meta.update(kwargs.pop('meta'))
return scrapy.Request(url, meta=meta, **kwargs)
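
To illustrate the two code paths after this change, here is a minimal sketch (an assumed example, not part of this diff; the URLs and file names are hypothetical, and `parameters` is the query-string file-name formatter from kingfisher_scrapy.util):

from kingfisher_scrapy.util import parameters


def example_requests(spider):
    # `spider` is an instance of any BaseSpider subclass.

    # With a formatter, the file name is derived from the URL, and '.json' is appended
    # if the name has no recognized extension.
    yield spider.build_request('https://example.com/api/packages?page=1', formatter=parameters('page'))

    # With formatter=None, the caller must supply meta['file_name'] itself
    # (the new assert enforces this).
    yield spider.build_request(
        'https://example.com/api/search',
        formatter=None,
        method='POST',
        meta={'file_name': 'page-0.json'},
    )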
20 changes: 16 additions & 4 deletions kingfisher_scrapy/base_spiders/index_spider.py
@@ -27,14 +27,16 @@ class IndexSpider(SimpleSpider):
#. If the ``page`` query string parameter is zero-indexed, set ``start_page = 0``.
#. Set ``formatter`` to set the file name like in :meth:`~kingfisher_scrapy.base_spiders.BaseSpider.build_request`.
If ``page_count_pointer`` or ``use_page = True``, it defaults to ``parameters(<param_page>)``. Otherwise, if
-``result_count_pointer`` is set and ``use_page = False``, it defaults to ``parameters(<param_offset>)``.
+``result_count_pointer`` is set and ``use_page = False``, it defaults to ``parameters(<param_offset>)``. If
+``formatter = None``, the ``url_builder()`` method must ``return url, {'meta': {'file_name': ...}, ...}``.
#. Write a ``start_requests()`` method to yield the initial URL. The request's ``callback`` parameter should be set
to ``self.parse_list``.

If neither ``page_count_pointer`` nor ``result_count_pointer`` can be used to create the URLs (e.g. if you need to
query a separate URL that does not return JSON), you need to define ``range_generator()`` and ``url_builder()``
methods. ``range_generator()`` should return page numbers or offset numbers. ``url_builder()`` receives a page or
-offset from ``range_generator()``, and returns a URL to request.
+offset from ``range_generator()``, and returns either a request URL, or a tuple of a request URL and keyword
+arguments (to pass to :meth:`~kingfisher_scrapy.base_spiders.BaseSpider.build_request`).

If the results are in ascending chronological order, set ``chronological_order = 'asc'``.

@@ -112,8 +114,18 @@ def parse_list(self, response):
# https://doc.scrapy.org/en/latest/topics/request-response.html#scrapy.http.Request
if self.chronological_order == 'desc':
priority *= -1
-yield self.build_request(self.url_builder(value, data, response), formatter=self.formatter,
-                         priority=priority, callback=self.parse_list_callback)
+return_value = self.url_builder(value, data, response)
+if isinstance(return_value, tuple):
+    url, kwargs = return_value
+else:
+    url, kwargs = return_value, {}
+yield self.build_request(
+    url,
+    formatter=self.formatter,
+    priority=priority,
+    callback=self.parse_list_callback,
+    **kwargs,
+)

def parse_list_loader(self, response):
return response.json()
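
As a minimal sketch of the tuple-returning url_builder() contract described in the docstring above (an assumed example, not a spider in this repository; the spider name and endpoint are hypothetical). The Plataforma Digital Nacional base spider added later in this diff follows the same pattern:

import json

import scrapy

from kingfisher_scrapy.base_spiders import IndexSpider


class ExamplePaginatedSearch(IndexSpider):
    name = 'example_paginated_search'

    # SimpleSpider
    data_type = 'release'

    # IndexSpider
    result_count_pointer = '/pagination/total'
    limit = '/pagination/pageSize'
    use_page = True
    start_page = 0
    formatter = None  # url_builder() supplies meta['file_name'] instead

    # Local (hypothetical endpoint)
    url = 'https://example.com/api/v1/search'

    def start_requests(self):
        yield scrapy.Request(self.url, method='POST', meta={'file_name': 'page-0.json'}, callback=self.parse_list)

    def url_builder(self, value, data, response):
        # Return a (url, kwargs) tuple; kwargs are forwarded to build_request().
        return self.url, {
            'method': 'POST',
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps({'page': value, 'pageSize': 10}),
            'meta': {'file_name': f'page-{value}.json'},
        }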
3 changes: 1 addition & 2 deletions kingfisher_scrapy/spiders/ecuador_sercop_bulk.py
@@ -32,5 +32,4 @@ class EcuadorSERCOPBulk(CompressedFileSpider, PeriodicSpider):
formatter = staticmethod(components(-1))

def build_request(self, url, formatter, **kwargs):
-    meta = {'meta': {'file_name': f'{formatter(url)}.zip'}}
-    return super().build_request(url, formatter, **meta)
+    return super().build_request(url, formatter, meta={'file_name': f'{formatter(url)}.zip'}, **kwargs)
kingfisher_scrapy/spiders/mexico_aguascalientes_sesea_plataforma_digital_nacional.py
@@ -1,12 +1,7 @@
-import json
+from kingfisher_scrapy.spiders.mexico_plataforma_digital_nacional_base import MexicoPlataformaDigitalNacionalBase

-import scrapy

-from kingfisher_scrapy.base_spiders import IndexSpider
-from kingfisher_scrapy.util import handle_http_error


-class MexicoAguascalientesSESEAPlataformaDigitalNacional(IndexSpider):
+class MexicoAguascalientesSESEAPlataformaDigitalNacional(MexicoPlataformaDigitalNacionalBase):
"""
Domain
Secretaría Ejecutiva del Sistema Estatal Anticorrupción de Aguascalientes (SESEA) - Plataforma Digital Nacional
@@ -15,29 +10,5 @@ class MexicoAguascalientesSESEAPlataformaDigitalNacional(IndexSpider):
"""
name = 'mexico_aguascalientes_sesea_plataforma_digital_nacional'

-# BaseSpider
-root_path = 'data.item'
-
-# SimpleSpider
-data_type = 'release'
-
-# IndexSpider
-limit = '/pagination/pageSize'
-result_count_pointer = '/pagination/total'
-start_page = 0
-use_page = True
-
-# Local
-url = 'https://api.plataformadigitalnacional.org/s6/api/v1/search?supplier_id=SESEA_AGS'
-
-def start_requests(self):
-    yield scrapy.Request(self.url, meta={'file_name': 'page-0.json'}, callback=self.parse_list, method='POST')
-
-@handle_http_error
-def parse_list(self, response):
-    data = self.parse_list_loader(response)
-    yield from self.parse(response)
-    for value in self.range_generator(data, response):
-        payload = json.dumps({'page': value, 'pageSize': 10})
-        yield scrapy.Request(self.url, body=payload, meta={'file_name': f'page-{value}.json'}, method='POST',
-                             headers={'Accept': 'application/json', 'Content-Type': 'application/json'})
+# MexicoPlataformaDigitalNacionalBase
+publisher_id = 'SESEA_AGS'
kingfisher_scrapy/spiders/mexico_mexico_state_sesaemm_plataforma_digital_nacional.py
@@ -0,0 +1,15 @@
from kingfisher_scrapy.spiders.mexico_plataforma_digital_nacional_base import MexicoPlataformaDigitalNacionalBase


class MexicoMexicoStateSESAEMMPlataformaDigitalNacional(MexicoPlataformaDigitalNacionalBase):
"""
Domain
Secretaría Ejecutiva del Sistema Estatal Anticorrupción del Estado de México y Municipios (SESAEMM) (Mexico) -
Plataforma Digital Nacional
Bulk download documentation
https://plataformadigitalnacional.org/contrataciones
"""
name = 'mexico_mexico_state_sesaemm_plataforma_digital_nacional'

# MexicoPlataformaDigitalNacionalBase
publisher_id = 'SESAEMM_EDOMEX'
kingfisher_scrapy/spiders/mexico_michoacan_sesea_plataforma_digital_nacional.py
@@ -0,0 +1,15 @@
from kingfisher_scrapy.spiders.mexico_plataforma_digital_nacional_base import MexicoPlataformaDigitalNacionalBase


class MexicoMichoacanSESEAPlataformaDigitalNacional(MexicoPlataformaDigitalNacionalBase):
"""
Domain
Secretaría Ejecutiva del Sistema Estatal Anticorrupción del Estado de Michoacán (SESEA) (Mexico) -
Plataforma Digital Nacional
Bulk download documentation
https://plataformadigitalnacional.org/contrataciones
"""
name = 'mexico_michoacan_sesea_plataforma_digital_nacional'

# MexicoPlataformaDigitalNacionalBase
publisher_id = 'SESEA_MCH'
kingfisher_scrapy/spiders/mexico_plataforma_digital_nacional_base.py
@@ -0,0 +1,41 @@
import json

import scrapy

from kingfisher_scrapy.base_spiders import IndexSpider


class MexicoPlataformaDigitalNacionalBase(IndexSpider):
# BaseSpider
root_path = 'data.item'

# SimpleSpider
data_type = 'release'

# IndexSpider
result_count_pointer = '/pagination/total'
limit = '/pagination/pageSize'
use_page = True
start_page = 0
formatter = None

# Local
url_prefix = 'https://api.plataformadigitalnacional.org/s6/api/v1/search?supplier_id='

# publisher_id must be provided by subclasses.

def start_requests(self):
yield scrapy.Request(
f'{self.url_prefix}{self.publisher_id}',
method='POST',
meta={'file_name': 'page-0.json'},
callback=self.parse_list,
)

def url_builder(self, value, data, response):
return f'{self.url_prefix}{self.publisher_id}', {
'method': 'POST',
'headers': {'Accept': 'application/json', 'Content-Type': 'application/json'},
'body': json.dumps({'page': value, 'pageSize': 10}),
'meta': {'file_name': f'page-{value}.json'},
}
kingfisher_scrapy/spiders/mexico_puebla_state_seseap_plataforma_digital_nacional.py
@@ -0,0 +1,15 @@
from kingfisher_scrapy.spiders.mexico_plataforma_digital_nacional_base import MexicoPlataformaDigitalNacionalBase


class MexicoPueblaStateSESEAPlataformaDigitalNacional(MexicoPlataformaDigitalNacionalBase):
"""
Domain
Secretaría Ejecutiva del Sistema Estatal Anticorrupción del Estado de Puebla (SESEAP) (Mexico) -
Plataforma Digital Nacional
Bulk download documentation
https://plataformadigitalnacional.org/contrataciones
"""
name = 'mexico_puebla_state_seseap_plataforma_digital_nacional'

# MexicoPlataformaDigitalNacionalBase
publisher_id = 'SESAE_PUE'
kingfisher_scrapy/spiders/mexico_quintana_roo_sesaeqroo_plataforma_digital_nacional.py
@@ -0,0 +1,15 @@
from kingfisher_scrapy.spiders.mexico_plataforma_digital_nacional_base import MexicoPlataformaDigitalNacionalBase


class MexicoQuintanaRooSESAEQROOPlataformaDigitalNacional(MexicoPlataformaDigitalNacionalBase):
"""
Domain
Secretaría Ejecutiva del Sistema Anticorrupción del Estado de Quintana Roo (SESAEQROO) (Mexico) -
Plataforma Digital Nacional
Bulk download documentation
https://plataformadigitalnacional.org/contrataciones
"""
name = 'mexico_quintana_roo_sesaeqroo_plataforma_digital_nacional'

# MexicoPlataformaDigitalNacionalBase
publisher_id = 'SESAE_QROO'
kingfisher_scrapy/spiders/mexico_shcp_plataforma_digital_nacional.py
@@ -0,0 +1,14 @@
from kingfisher_scrapy.spiders.mexico_plataforma_digital_nacional_base import MexicoPlataformaDigitalNacionalBase


class MexicoSHCPPlataformaDigitalNacional(MexicoPlataformaDigitalNacionalBase):
"""
Domain
Secretaría de Hacienda y Crédito Público (SHCP) (Mexico) - Plataforma Digital Nacional
Bulk download documentation
https://plataformadigitalnacional.org/contrataciones
"""
name = 'mexico_shcp_plataforma_digital_nacional'

# MexicoPlataformaDigitalNacionalBase
publisher_id = 'SHCP'
kingfisher_scrapy/spiders/mexico_veracruz_state_sesea_plataforma_digital_nacional.py
@@ -0,0 +1,15 @@
from kingfisher_scrapy.spiders.mexico_plataforma_digital_nacional_base import MexicoPlataformaDigitalNacionalBase


class MexicoVeracruzStateSESEAPlataformaDigitalNacional(MexicoPlataformaDigitalNacionalBase):
"""
Domain
Secretaría Ejecutiva del Sistema Estatal Anticorrupción de Veracruz de Ignacio de la Llave (SESEA) (Mexico) -
Plataforma Digital Nacional
Bulk download documentation
https://plataformadigitalnacional.org/contrataciones
"""
name = 'mexico_veracruz_state_sesea_plataforma_digital_nacional'

# MexicoPlataformaDigitalNacionalBase
publisher_id = 'SESEA_VER'
6 changes: 3 additions & 3 deletions kingfisher_scrapy/spiders/peru_osce_bulk.py
@@ -21,18 +21,18 @@ class PeruOSCEBulk(CompressedFileSpider, IndexSpider):
# IndexSpider
formatter = staticmethod(components(-1))
page_count_pointer = '/pagination/num_pages'
-parse_list_callback = 'parse_files_list'
+parse_list_callback = 'parse_page'

peru_base_url = 'https://contratacionesabiertas.osce.gob.pe/api/v1/files?page={0}&paginateBy=10&format=json'

def start_requests(self):
yield scrapy.Request(self.peru_base_url.format(1),
meta={'file_name': 'list.json'}, callback=self.parse_list)

-def pages_url_builder(self, value, data, response):
+def url_builder(self, value, data, response):
return self.peru_base_url.format(value)

@handle_http_error
-def parse_files_list(self, response):
+def parse_page(self, response):
for item in response.json()['results']:
yield scrapy.Request((item['files']['json']), meta={'file_name': 'data.zip'})