This pattern will never match a URL passed to can_fetch(), as far as I
can tell.
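For illustration, here is a made-up stand-in for the kind of pattern involved (the exact robots.txt from the report isn't reproduced here), run against the Python 2.x robotparser module:

    # Hypothetical robots.txt rule containing a literal '?', fed straight
    # to the stdlib parser on an affected Python 2.x interpreter.
    import robotparser

    rp = robotparser.RobotFileParser()
    rp.parse([
        "User-agent: *",
        "Disallow: /search?",
    ])

    # One would expect False (disallowed); the old robotparser returns True,
    # because the query string is stripped from the URL before matching and
    # the '?' in the stored rule has been percent-escaped to '%3F'.
    print(rp.can_fetch("*", "http://example.com/search?q=python"))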
It's arguable whether this is a bug. The 1994 robots.txt protocol is
silent on whether to treat query strings specially and just says "any
URL that starts with this value will not be retrieved". The 1997 draft
standard talks about the path portion of a URL but doesn't give any
examples of how to treat the '?' character in a robots.txt pattern.
I'll leave aside whether to implement pattern matching, but it seems
like a good idea to do something reasonable when a robots.txt pattern
contains a literal '?', and treating it as a literal character seems
simplest.
Cause: robotparser.can_fetch() contains code that takes only the path of
the URL, stripping the query string, before matching it against the rules.
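From memory, the relevant line in the 2.x module looks roughly like this (paraphrased, not a verbatim quote of the source):

    # In RobotFileParser.can_fetch(): the URL is reduced to its path
    # component (urlparse index 2), so "?q=..." and everything after it
    # is silently discarded before the rules are consulted.
    url = urllib.quote(urlparse.urlparse(urllib.unquote(url))[2]) or "/"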
Also, when parsing patterns from the robots.txt file, a '?' character
seems to be automatically URL-escaped. There's nothing in a standards
document about doing this, so I think that might be a bug too.
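If I'm reading the code right, the escaping happens when the rule is stored (the pattern is passed through urllib.quote()), and quote() escapes '?' by default:

    import urllib

    # quote() only leaves '/' unescaped by default, so a literal '?' in a
    # Disallow pattern becomes '%3F' when the rule is stored...
    print(urllib.quote("/search?"))      # -> '/search%3F'

    # ...while can_fetch() matches against the path only ('/search'),
    # so a pattern containing '?' can never be a prefix of the tested URL.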
Tested with Python 2.4. I looked at the code at the Subversion head and it
doesn't look like there have been any changes on the trunk.
I modified the patch slightly (so that it takes care of path, query, params and fragments).
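The approach is roughly this (a sketch, not the exact committed diff): instead of keeping only the path, can_fetch() reassembles path, params, query and fragment before quoting and matching.

    # Sketch of the change inside RobotFileParser.can_fetch() (Python 2.x):
    parsed_url = urlparse.urlparse(urllib.unquote(url))
    url = urlparse.urlunparse(('', '', parsed_url.path,
                               parsed_url.params, parsed_url.query,
                               parsed_url.fragment))
    url = urllib.quote(url) or "/"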
Fixed in r83209, r83210 and r83211.
I also think that we need to change robotparser to allow regexes in the Allow and Disallow patterns. (I shall open an issue in the tracker if one is not already present.)