Merge pull request #219 from PyFeeds/dev/nblock
Expand output_path and enhance documentation
Lukas0907 committed May 16, 2020
2 parents 02fc464 + 5352042 commit d1885d5
Showing 4 changed files with 47 additions and 20 deletions.
22 changes: 11 additions & 11 deletions README.rst
@@ -13,17 +13,16 @@ long gone. The once iconic orange RSS icon has been replaced by "social share"
buttons.

Feeds aims to bring back the good old reading times. It creates Atom feeds for
-websites that don't offer them (anymore). It allows you to read new articles
-of your favorite websites in your feed reader (e.g. `Tiny Tiny RSS
-<https://tt-rss.org>`_) even if this is not officially supported by the
-website.
+websites that don't offer them (anymore). It allows you to read new articles of
+your favorite websites in your feed reader (e.g. TinyTinyRSS_) even if this is
+not officially supported by the website.

Furthermore it can also enhance existing feeds by inlining the actual content
into the feed entry so it can be read without leaving the feed reader.

Feeds is based on Scrapy_, a framework for extracting data from websites, and
it's easy to add support for new websites. Just take a look at the existing
-spiders in ``feeds/spiders`` and feel free to open a pull request!
+spiders_ and feel free to open a `pull request`_!

Documentation
-------------
@@ -137,12 +136,10 @@ Pull requests
* Create a topic branch and make your desired changes.
* Open a pull request. Make sure the travis checks are passing.

-Author
-------

-Feeds is written and maintained by `Florian Preinstorfer
-<https://nblock.org>`_ and
-`Lukas Anzinger <https://www.notinventedhere.org>`_.
+Authors
+-------
+Feeds is written and maintained by `Florian Preinstorfer <https://nblock.org>`_
+and `Lukas Anzinger <https://www.notinventedhere.org>`_.

License
-------
@@ -153,6 +150,9 @@ AGPL3, see `LICENSEFILE`_ for details.
.. _issue tracker: https://github.com/pyfeeds/pyfeeds/issues
.. _new issue: https://github.com/pyfeeds/pyfeeds/issues/new
.. _Scrapy: https://www.scrapy.org
+.. _TinyTinyRSS: https://tt-rss.org
+.. _pull request: https://pyfeeds.readthedocs.io/en/latest/contribute.html
+.. _spiders: https://github.com/PyFeeds/PyFeeds/tree/master/feeds/spiders
.. _Falter: https://pyfeeds.readthedocs.io/en/latest/spiders/falter.at.html
.. _Konsument: https://pyfeeds.readthedocs.io/en/latest/spiders/konsument.at.html
.. _LWN: https://pyfeeds.readthedocs.io/en/latest/spiders/lwn.net.html
31 changes: 27 additions & 4 deletions docs/configure.rst
@@ -29,9 +29,12 @@ List one spider per line.
tvthek.orf.at
oe1.orf.at
+Use ``feeds list`` to get a list of all available spiders.

output_path
~~~~~~~~~~~
-This is the path where the generated Atom feeds will be saved.
+This is the path where the generated Atom feeds will be saved. You may serve
+this directory with any webserver.

.. code-block:: ini
@@ -50,6 +53,26 @@ https://validator.w3.org/feed/docs/warning/MissingSelf.html
[feeds]
output_url = https://example.com/feeds
+truncate_words
+~~~~~~~~~~~~~~
+Truncate content to 10 words instead of including the full text. This can be
+useful if generated feeds should be made publicly available.

+.. code-block:: ini
+[feeds]
+truncate_words = 10
+remove_images
+~~~~~~~~~~~~~
+Remove images from output. This can be useful if generated feeds should be made
+publicly available.

+.. code-block:: ini
+[feeds]
+remove_images = 1
cache_enabled
~~~~~~~~~~~~~
Feeds can be configured to use a cache for HTTP responses which is highly
@@ -68,7 +91,7 @@ The path where cache data is stored.
.. code-block:: ini
[feeds]
-cache_dir = .cache
+cache_dir = ~/.cache/feeds
cache_expires
~~~~~~~~~~~~~
@@ -81,8 +104,8 @@ Expire (remove) entries from cache after 90 days.
Spider specific settings
------------------------
-Some spiders support additional settings. Head over to the Supported Websites
-section for more information on spider specific settings.
+Some spiders support additional settings. Head over to the :ref:`Supported
+Websites` section for more information on spider specific settings.

.. _example configuration:
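The settings documented above are plain INI keys in a ``[feeds]`` section. The sketch below is not code from this repository: it uses made-up values and hypothetical helper names to illustrate how such a configuration could be read with the Python standard library, and roughly what ``output_path`` expansion, ``truncate_words`` and ``remove_images`` amount to.

.. code-block:: python

    import configparser
    import os
    import re
    import textwrap

    # Hypothetical feeds.cfg contents; the [feeds] section and key names follow
    # the documented examples, but the concrete values are made up.
    EXAMPLE_CONFIG = textwrap.dedent("""
        [feeds]
        output_path = ~/feeds/output
        truncate_words = 10
        remove_images = 1
    """)

    config = configparser.ConfigParser()
    config.read_string(EXAMPLE_CONFIG)
    feeds = config["feeds"]

    # "~" is only expanded by a shell, so it has to be expanded explicitly
    # before the path is used; this is what the pipelines.py change below does.
    output_path = os.path.expanduser(feeds["output_path"])


    def truncate_words(text, limit):
        """Keep only the first `limit` words; a rough illustration of the setting."""
        return " ".join(text.split()[:limit])


    def remove_images(html):
        """Drop <img> tags; a crude stand-in for whatever Feeds actually does."""
        return re.sub(r"<img\b[^>]*>", "", html)


    entry = '<img src="teaser.jpg"> A long article body that would otherwise be inlined in full ...'
    if feeds.getboolean("remove_images"):
        entry = remove_images(entry)
    entry = truncate_words(entry, feeds.getint("truncate_words"))

    print(output_path)  # e.g. /home/user/feeds/output
    print(entry)        # first ten words of the entry text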

11 changes: 7 additions & 4 deletions docs/index.rst
@@ -28,10 +28,13 @@ websites that don't offer them (anymore). It allows you to read new articles of
your favorite websites in your feed reader (e.g. TinyTinyRSS_) even if this is
not officially supported by the website.

+Furthermore it can also enhance existing feeds by inlining the actual content
+into the feed entry so it can be read without leaving the feed reader.

Feeds is based on Scrapy_, a framework for extracting data from websites and it
has support for a few websites already, see :ref:`Supported Websites`. It's
-easy to add support for new websites. Just take a look at the existing spiders
-in ``feeds/spiders`` and feel free to open a :ref:`pull request <Contribute>`!
+easy to add support for new websites. Just take a look at the existing spiders_
+and feel free to open a :ref:`pull request <Contribute>`!

Related work
------------
@@ -49,8 +52,8 @@ Related work
Authors
-------
Feeds is written and maintained by `Florian Preinstorfer <https://nblock.org>`_
-and `Lukas Anzinger <https://www.notinventedhere.org>`_ (`@LukasAnzinger`_).
+and `Lukas Anzinger <https://www.notinventedhere.org>`_.

.. _Scrapy: https://www.scrapy.org
.. _TinyTinyRSS: https://tt-rss.org
-.. _@LukasAnzinger: https://twitter.com/LukasAnzinger
+.. _spiders: https://github.com/PyFeeds/PyFeeds/tree/master/feeds/spiders
3 changes: 2 additions & 1 deletion feeds/pipelines.py
@@ -1,3 +1,4 @@
+import os
import uuid
from datetime import datetime, timezone

@@ -68,7 +69,7 @@ class AtomExportPipeline(object):
"""Export items as atom feeds."""

def __init__(self, output_path, output_url):
self._output_path = output_path
self._output_path = os.path.expanduser(output_path)
self._output_url = output_url
self._exporters = {}

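The single functional change in this commit: ``output_path`` is now passed through ``os.path.expanduser``, so a leading ``~`` (or ``~user``) resolves to a home directory before feeds are written. A minimal stand-in, not the project's actual ``AtomExportPipeline`` (``feed_path`` and the ``feed.atom`` file name are made up), showing the effect:

.. code-block:: python

    import os


    class AtomExportPipelineSketch:
        """Simplified stand-in for AtomExportPipeline; only path handling is shown."""

        def __init__(self, output_path, output_url):
            # Same idea as the change above: "~" and "~user" prefixes are expanded,
            # other paths pass through unchanged.
            self._output_path = os.path.expanduser(output_path)
            self._output_url = output_url

        def feed_path(self, spider_name):
            # Hypothetical helper: where a spider's Atom feed might be written.
            return os.path.join(self._output_path, spider_name, "feed.atom")


    pipeline = AtomExportPipelineSketch("~/feeds/output", "https://example.com/feeds")
    print(pipeline.feed_path("oe1.orf.at"))
    # e.g. /home/user/feeds/output/oe1.orf.at/feed.atom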
