
Feature Request: Parse Atom RSS feeds #1171

Open
3 of 9 tasks
sclu1034 opened this issue Jul 6, 2023 · 8 comments

sclu1034 commented Jul 6, 2023

Type

  • General question or discussion
  • Propose a brand new feature
  • Request modification of existing behavior or design

What is the problem that your feature request solves

Generic Atom feeds (see https://datatracker.ietf.org/doc/html/rfc4287) cannot be parsed by the current RSS parsers.

Describe the ideal specific solution you'd want, and whether it fits into any broader scope of changes

Ideally, a library like feedparser would be used, both as a more robust solution than the current hand-rolled regex parsers and as something that already supports a wide range of feed formats.
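
For context, feedparser normalizes RSS 1.0/2.0 and Atom into the same entries structure, so one code path can cover every format. A minimal sketch (the feed URL is only a placeholder):

import feedparser

# feedparser accepts a URL, a local file path, or a raw feed string
d = feedparser.parse("https://example.org/feed.xml")  # placeholder URL

print(d.version)  # e.g. 'rss20' or 'atom10', detected automatically
print(d.bozo)     # truthy if the feed was malformed but still salvageable

for entry in d.entries:
    # the same attributes work regardless of the underlying feed format
    print(entry.title, entry.link)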

What hacks or alternative solutions have you tried to solve the problem?

I wrote a script myself that uses that library and turns the feed info into JSON, which I can pipe into archivebox add --parser json.

How badly do you want this new feature?

  • It's an urgent deal-breaker, I can't live without it
  • It's important to add it in the near-mid term future
  • It would be nice to have eventually

  • I'm willing to contribute dev time / money to fix this issue
  • I like ArchiveBox so far / would recommend it to a friend
  • I've had a lot of difficulty getting ArchiveBox set up

melyux commented Jul 12, 2023

Please do this. I'm surprised this project isn't using a proper RSS parser. I just spent the entire day writing regex to pick out the random RSS and W3 links that ArchiveBox keeps pulling out of my RSS feeds somehow.

sclu1034 (author) commented:

"the random RSS and W3 links that ArchiveBox keeps pulling out of my RSS feeds somehow"

That's because the RSS parsers fail, hand over to the next parser, and eventually the txt parser gets a shot; that one simply takes everything that looks like a URL.
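
As a rough illustration (this is not ArchiveBox's actual code), a plain "grab anything that looks like a URL" regex will happily match the XML namespace declaration at the top of an Atom feed, which is exactly where those w3.org links come from:

import re

# a naive URL pattern, for illustration only
URL_RE = re.compile(r'https?://[^\s"<>]+')

atom_header = '<feed xmlns="http://www.w3.org/2005/Atom">'
print(URL_RE.findall(atom_header))
# prints ['http://www.w3.org/2005/Atom'] -- a namespace URI, not an article link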

Here's the script I use as a workaround to parse feeds into JSON first:

#!/usr/bin/env python3
# Convert any feed that feedparser understands (RSS 1.0/2.0, Atom, ...)
# into the JSON list format accepted by `archivebox add --parser json`.

import json
import sys

import feedparser

dom = feedparser.parse(sys.argv[1])

links = []

for entry in dom.entries:
    # Flatten the entry's category/tag terms into a comma-separated string.
    tags = ",".join(tag.term for tag in entry.get('tags', []))
    links.append({
        'url': entry.link,
        'title': entry.get('title', ''),
        'tags': tags,
        'description': entry.get('summary', ''),
        # 'created': entry.get('published'),
    })

print(json.dumps(links))

Run it like this:

script.py <feed_url> | archivebox add --parser json

melyux commented Jul 20, 2023

Wonder if there's a way to use a script like this in the scheduler... I guess not officially, would be easier to just fix the parsers if that's the way... but maybe I can modify the crontab directly to use the script. Let's see

melyux commented Jul 20, 2023

Wow feedparser is incredible, takes anything I throw at it. Could be an easy drop-in @pirate?

melyux commented Jul 21, 2023

Modified the crontab manually, and it works. I put the feedparse.py script into the container's mounted /data directory. I wrote a new Dockerfile to add the feedparser Python package into the image (along with the newer version of SingleFile, see #883) and used the local image. Here's the crontab mounted into the scheduler container:

@daily cd /data && /usr/local/bin/archivebox add --parser json "$(/data/feedparse.py 'https://www.domain.com/feed.rss')" >> /data/logs/schedule.log 2>&1 # archivebox_schedule

The format with the | pipe you suggested above works on its own, but ArchiveBox can't parse it from the crontab; it is able to parse my version above, which is also runnable by the archivebox schedule --run-all command.

The Dockerfile:

FROM archivebox/archivebox:dev
RUN npm install -g single-file-cli
RUN pip install feedparser

After that, edit the archivebox block in docker-compose.yml: remove the image: archivebox/archivebox line and replace it with build: ./archivebox (or whatever path you stored your new Dockerfile in). You can also add image: archivebox to the block to tag the custom image, so the scheduler container can reuse it with the same image: archivebox line.

Also in this block, set the env variable SINGLEFILE_BINARY=/usr/bin/single-file to pick up the newer SingleFile version, since npm installs it to a different path than the default.
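
Roughly, those docker-compose.yml edits could look like the sketch below. The service names, paths, and the scheduler command are assumptions based on the stock ArchiveBox compose file, so adjust them to your setup:

services:
  archivebox:
    # build the custom image from the Dockerfile above instead of pulling archivebox/archivebox
    build: ./archivebox
    # tag the built image so other services can reference it by name
    image: archivebox
    environment:
      # npm puts single-file outside the path ArchiveBox expects by default
      - SINGLEFILE_BINARY=/usr/bin/single-file
    volumes:
      - ./data:/data

  archivebox_scheduler:
    # reuse the locally built image instead of the published one
    image: archivebox
    command: schedule --foreground  # assumed invocation; keep whatever your compose file already uses
    environment:
      - SINGLEFILE_BINARY=/usr/bin/single-file
    volumes:
      - ./data:/data

After editing, docker compose build rebuilds the image with feedparser and the newer single-file-cli baked in.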

pirate commented Aug 16, 2023

Sorry for causing you so much extra overhead / debugging time to have to resort to this workaround @melyux, but thanks for documenting your process here for others!

All my dev focus is currently on a refactor I have in progress to add Huey support to ArchiveBox, which has left a few of these relatively big issues to languish. I appreciate everyone's patience while I give some much-needed attention to the internal architecture!

jimwins added a commit to jimwins/ArchiveBox that referenced this issue Feb 25, 2024
The feedparser package has 20 years of history and is very good at parsing
RSS and Atom, so use that instead of ad-hoc regex and XML parsing.

The medium_rss and shaarli_rss parsers weren't touched because they are
probably unnecessary. (The special parser for pinboard is just needed because
of how tags work.)

Doesn't include tests because I haven't figured out how to run them in the
docker development setup.

Fixes ArchiveBox#1171
jimwins added a commit to jimwins/ArchiveBox that referenced this issue Mar 1, 2024
pirate added a commit that referenced this issue Mar 14, 2024
Fixes #1171
Fixes #870 (probably, would need to test against a Wallabag Atom file to
Fixes #135
Fixes #123
Fixes #106

pirate commented Mar 27, 2024

This should work now that we've switched to feedparser; let me know if y'all still have issues on the latest :dev build and I'll reopen.

pirate closed this as completed Mar 27, 2024

Ramblurr commented Apr 11, 2024

@pirate I have several feeds I'd like to parse using this feature, but none of them are working. I'm using the :dev tag (image hash 5c0d2df58cf7ac9aa314129de16121c991172e34081ce132e2575fe7160d5b1b)

Samples are attached (.txt extension added to bypass GitHub's mimetype detector)

all.rss.txt

user-favorites.xml.txt

The sources are:

pirate reopened this Apr 16, 2024