
Introduction in feedparser docs needs new URL #71

Closed
MattDMo opened this issue May 31, 2016 · 3 comments · Fixed by #355

Comments


MattDMo commented May 31, 2016

From the very first code bit on the Introduction page:

>>> import feedparser
>>> d = feedparser.parse('http://feedparser.org/docs/examples/atom10.xml')

All seems to work OK, except:

>>> d["feed"]["title"]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/feedparser.py", line 357, in __getitem__
    return dict.__getitem__(self, key)
KeyError: 'title'
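(Aside: since FeedParserDict subclasses dict, the usual defensive pattern is `.get()` instead of indexing, so a feed missing its `title` doesn't raise. A minimal stdlib-only sketch of the pattern — the dict below is a stand-in for what `feedparser.parse()` returns when the URL serves parked-page HTML instead of a feed:)

```python
# Stand-in for the result of feedparser.parse() on a non-feed URL:
# the 'feed' mapping exists but contains no 'title' key.
d = {"feed": {}, "bozo": 1, "entries": []}

# d["feed"]["title"] would raise KeyError here, exactly as in the
# traceback above; dict.get() returns a default instead.
title = d["feed"].get("title", "(no title found)")
print(title)  # -> (no title found)
```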

The source of atom10.xml is

<!DOCTYPE html>
<body style="padding:0; margin:0;">
<html>
<body>
    <iframe src="http://mcc.godaddy.com/park/p3WlpJAhMJMlMF5vMKD=" style="visibility: visible;height: 100%; position:absolute" allowtransparency="true" marginheight="0" marginwidth="0" frameborder="0" width="100%">
    </iframe>
</body>
</html>

(I formatted it; it was all on one line.) From what I can gather, the domain is parked and serves no feed content. So, if you own the domain, could you put an acceptable RSS feed file at the indicated URL? If not, could you find another sample RSS feed to use? You could probably just post a file on GitHub or something.

Thanks!


MattDMo commented May 31, 2016

OK, I didn't read far enough (this is my first time going through the tutorial). Apparently this page has a non-functional feedparser.org URL as well. I don't have time tonight to go through all the .rst files and search for bad URLs, but it's a trivial exercise if you know how to grep 😀
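(Along the lines of the grep suggestion, a one-liner like the following would surface any remaining references; the `docs/` path and the `.rst` glob are assumptions about the repo layout:)

```shell
# Recursively list every .rst file under docs/ that still
# mentions the old feedparser.org domain, with line numbers.
grep -rn --include='*.rst' 'feedparser\.org' docs/
```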


buhtz commented Apr 26, 2019

Related to #166

@kurtmckee
Owner

Thanks for bringing this to my attention! I thought that I did a grep for this years ago but I guess I didn't. =(

I'll get this fixed soon!
