README (4 changes: 2 additions and 2 deletions)
@@ -33,7 +33,7 @@ Helper functions for saving and querying an SQL database. Updates
the schema automatically according to the data you save.

Currently only supports SQLite. It will make a local SQLite database.
-It is based on [SQLAlchemy](https://pypi.python.org/pypi/SQLAlchemy).
+It is based on `SQLAlchemy <https://pypi.python.org/pypi/SQLAlchemy>`_.
You should expect it to support other SQL databases at a later date.

scraperwiki.sql.save(unique_keys, data[, table_name="swdata"])
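    A rough usage sketch of the ``save`` helper shown above; the table columns and values here are illustrative, not part of the API::

        import scraperwiki

        # 'id' is the unique key: saving a record with an existing id updates
        # that row, and any new keys in `data` become new columns automatically.
        scraperwiki.sql.save(unique_keys=['id'],
                             data={'id': 1, 'name': 'Alice', 'score': 42})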
@@ -85,7 +85,7 @@ Miscellaneous
-------------

scraperwiki.status(type, message=None)
-    If run on the ScraperWiki platform (the new one, not Classic), updates the visible status of the dataset. If not on the platform, does nothing. ``params`` can be 'ok' or 'error'. If no ``message`` is given, it will show the time since the update. See `dataset status API <https://scraperwiki.com/help/developer#boxes-status>`_ in the documentation for details.
+    If run on the ScraperWiki platform (the new one, not Classic), updates the visible status of the dataset. If not on the platform, does nothing. ``type`` can be 'ok' or 'error'. If no ``message`` is given, it will show the time since the update. See `dataset status API <https://scraperwiki.com/help/developer#boxes-status>`_ in the documentation for details.
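    A sketch of how a scraper might report its outcome with ``status``; ``run_scraper`` is a hypothetical function standing in for your own scraping code::

        import scraperwiki

        try:
            run_scraper()  # hypothetical: your own scraping logic
            scraperwiki.status('ok')
        except Exception as exc:
            # Pass an explicit message so the dataset shows the failure reason.
            scraperwiki.status('error', 'Scrape failed: %s' % exc)
            raise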

scraperwiki.pdftoxml(pdfdata)
Convert a byte string containing a PDF file into an XML file containing the coordinates and font of each text string (see `the pdftohtml documentation <http://linux.die.net/man/1/pdftohtml>`_ for details). This requires ``pdftohtml`` which is part of ``poppler-utils``.
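    A short sketch, assuming ``pdftohtml`` (from ``poppler-utils``) is installed; the filename is illustrative::

        import scraperwiki

        with open('document.pdf', 'rb') as f:
            pdfdata = f.read()

        # Returns XML describing the coordinates and font of each text string.
        xml = scraperwiki.pdftoxml(pdfdata)
        print(xml[:500])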