RFC2732 support for urlparse (IPv6 addresses) #47236
Comments
The urlparse module's ways of splitting the location into hostname and port do not work for RFC 2732 style URLs containing IPv6 address literals:

>>> import urlparse
>>> urlparse.urlparse('http://[::1]:80/').hostname
'['
>>> urlparse.urlparse('http://[::1]:80/').port
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.5/urlparse.py", line 116, in port
    return int(port, 10)
ValueError: invalid literal for int() with base 10: ':1]:80'

A simple fix is attached, but probably requires a little more thought. |
I have written this patch because urlparse could not retrieve the hostname and port from URLs containing IPv6 address literals. This problem happens with Python 2.5.1 in Fedora 9.

It still needs some polishing and thinking: see the places marked in the patch. It might require some more comprehensive thought about how Python wants to handle IPv6 URLs in general.

On a not-totally-unrelated note, someone should examine whether IRIs[1] need to be supported as well.

[1] RFC 3987 - Internationalized Resource Identifiers (IRIs) |
Hi, with python-2.6.2-2.fc12.i686:

In: x = "http://www.somesite.com/images/rubricas/"
In: urlparse.urljoin(x, './07.11.2009-9:54:12-1.jpg')

urlparse.urlparse('07.11.2009-9:54:12-1.jpg')

is wrong, but

urlparse.urlparse('./07.11.2009-9:54:12-1.jpg')

isn't. Think about that please. |
okay, this should be easy to address. But the more important part is RFC compliance so that this simple change does not break many other things in the wild. |
I've created a patch for parse.py against the py3k branch, and I've also included ndim's test cases in that patch file. When returning the host name of an IPv6 literal, I don't include the surrounding '[' and ']'. For example, parsing http://[::1]:5432/foo/ gives the host name '::1'. |
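For reference, this is also the behaviour that today's urllib.parse exposes in Python 3, so the expectation can be sanity-checked against a current interpreter (a quick check, not part of the patch itself):

>>> from urllib.parse import urlsplit
>>> u = urlsplit('http://[::1]:5432/foo/')
>>> u.hostname
'::1'
>>> u.port
5432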
Seems sensible: Delimiters are not part of components. |
I think parsing should be a bit more careful. For example, what happens when you give 'http://dead:beef::]/foo/' as input (note the missing opening bracket)? |
By the way, updating the RFC list as done in python-urlparse-rfc2732-rfc-list.patch is also a good idea. |
Isn't "http://dead:beef::]/foo/" an invalid URI? Regarding the doc, see also bpo-5650. |
That's the point, it shouldn't parse as a valid one IMO. |
Regarding the RFC list issue, I've posted a new patch with a new RFC list that combines ndim's list and the comments from bpo-5650. Pitrou argues that http://dead:beef::]/foo/ should fail because it's a malformed URL. My response would be that the parse() function has historically assumed that a URL is well formed, and so this change to accommodate IPv6 should continue to assume the URL is well formed. I'd say that a separate bug should be raised if it's thought that parse() should be changed to check that any URL is well-formed. |
With respect to msg98314 (http://bugs.python.org/msg98314) referenced in this bug, which I thought would be easy to handle: it does not appear so. It is a bit tricky. The problem is that the relative url is given in the format '07.11.2009-9:54:12-1.jpg' and urlparse wrongly assumes that it is a VALID url with the scheme '07.11.2009-9' (surprisingly, these all fall under valid characters for a URL scheme, but we know that there is no url scheme like that). But when you give ./07.11.2009-9, the ./ is identified as a relative path and urljoin happens properly. My inclination for this specific msg98314 is to allow the user to give the proper path like ./07.11.2009-9, or use urljoin from a different directory, images/07.11.2009-9, and this should handle it. This date-time relative url is not a typical scenario, but for typical scenarios, urlparse behaves as expected.

>>> x = 'http://a.b.c'
>>> urlparse.urljoin(x, 'foo')
'http://a.b.c/foo'
>>> urlparse.urljoin(x, './foo')
'http://a.b.c/foo'

I shall provide my comments on the IPv6 parse in the next msg. |
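To make the misparse above concrete, here is a minimal sketch of the scheme-detection rule that trips over the date-named file (an assumed simplification for illustration, not the actual urlparse source): everything before the first ':' consists only of legal scheme characters, so it gets taken to be a scheme.

# Illustrative simplification (assumed, not the real urlparse code): a string
# is treated as having a scheme when everything before the first ':' is made
# up of scheme characters only.
scheme_chars = ('abcdefghijklmnopqrstuvwxyz'
                'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
                '0123456789'
                '+-.')

def looks_like_scheme(url):
    i = url.find(':')
    return i > 0 and all(c in scheme_chars for c in url[:i])

print(looks_like_scheme('07.11.2009-9:54:12-1.jpg'))    # True  -> misread as scheme '07.11.2009-9'
print(looks_like_scheme('./07.11.2009-9:54:12-1.jpg'))  # False -> './' forces a relative path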
After spending a sufficient amount of time looking at the patches and RFC 2732, I tend to agree with the patch provided by tlocke. It does cover the behavior for parsing an IPv6 URL with '[' hostname ']'. RFC 2732 is very short and just says that the hostname in an IPv6 URL should not include the '[' and ']' characters. The patch does just that, which is fine. If hard pressed on detecting invalid IPv6, I would add an extra check along the lines of

    if "[" in netloc and "]" in netloc:

which should take care of invalid IPv6 urls as discussed in this bug (a fuller sketch of this kind of bracket check follows this comment).
Also regarding the urlparse header docs (it was long pending on me, sorry), here is a patch for the current one for review. When we address this bug, I shall include RFC 2732 as well in the list. |
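Here is a minimal sketch of the kind of bracket-pairing check discussed above (illustrative only, not the exact patch that was committed): a netloc is rejected when '[' and ']' do not pair up.

# Illustrative helper, not the committed urlparse code: reject netlocs whose
# square brackets do not pair up, as in 'dead:beef::]' (missing '[').
def check_bracketed_netloc(netloc):
    if (('[' in netloc and ']' not in netloc) or
            (']' in netloc and '[' not in netloc)):
        raise ValueError("Invalid IPv6 URL")

check_bracketed_netloc('[::1]:80')          # passes silently
try:
    check_bracketed_netloc('dead:beef::]')  # unpaired bracket
except ValueError as exc:
    print(exc)                              # Invalid IPv6 URL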
Just thought I'd point out that RFC2732 was obsoleted by RFC3986 http://www.rfc-editor.org/rfc/rfc3986.txt |
Hello! Thanks for the precision. This particular topic is discussed on bpo-5650, feel free to help there! Better to update the code before the doc, though. Regards |
Actually, this bug is just for parsing the IPv6 url. We are having the RFC list discussion over at bpo-5650. |
Final patch with inclusion of detecting invalid urls at netloc and hostname level, tests and NEWS entry. |
This is ok with me. |
Committed into trunk in revision 80101 |
Merged into py3k in revision 80102 and release31-maint in revision 80103. Thanks for the patches, Tony and Hans. I have acknowledged it in the NEWS file too. |
Reverted the check-in made to 3.1 maint (in r80104). Features should not go in there. |
I posted this to the checkins list, but for reference, the following invalid URL should be added to the test cases:
|
Moving the bad URL check to a higher level can detect the bad urls much better. Once the netloc is parsed and obtained, the invalid URL can be checked. I am attaching an update with the new test included. |
I don't know how deep you want to get into detecting invalid URIs, but with the new patch there is still an input that causes a parsing error and is probably worth dealing with. Maybe a reasonable set of checks would be (in hostname): if the part of the netloc after the @ contains a ']' or a '[', then it must start with a '[' and either end with a ']' or contain a ']:'. I can also mess up your new checks with certain other malformed inputs; although those don't fail, they just faithfully produce the nonsensical results implicit in the invalid urls. I think the above check logic in hostname would catch them, but it wouldn't catch every such case. That may be OK, though, since as you noted earlier we aren't doing full URI validation. Oh, and I notice that your test only covers the 'fast' path code; it doesn't exercise the general URI logic. (Sorry I didn't review this issue earlier.) |
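A minimal sketch of the hostname-level rule described above (illustrative only; the function name and placement are assumptions, not the committed code):

# Return True when the host part of a netloc uses square brackets only as a
# well-formed '[...]' literal, optionally followed by ':port', per the rule
# suggested above. Illustrative helper, not part of urlparse.
def host_brackets_ok(netloc):
    host = netloc.rpartition('@')[2]          # part after any userinfo '@'
    if '[' in host or ']' in host:
        return host.startswith('[') and (host.endswith(']') or ']:' in host)
    return True

print(host_brackets_ok('[::1]:80'))       # True
print(host_brackets_ok('user@[::1]'))     # True
print(host_brackets_ok('dead:beef::]'))   # False - missing opening bracket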
I added an additional invalid test which David pointed out, and made changes to the invalid url checking code. I moved it to a higher level.
Now, other forms of invalid URLs are possible, as David points out (and possibly more too), but leaving those alone is better: catching them would add syntax checks in various different places (instead of a single place) without much added value. Dealing with valid URLs plus this single parse-logic check should be fine. commits: trunk - r80277 and py3k - r80278 |
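For reference, the behaviour that came out of this work can still be observed in today's urllib.parse (a quick Python 3 check, not part of the thread's patches; output abridged): bracketed IPv6 hosts are split correctly, and an unpaired bracket is rejected.

>>> from urllib.parse import urlsplit
>>> urlsplit('http://[::1]:80/').hostname, urlsplit('http://[::1]:80/').port
('::1', 80)
>>> urlsplit('http://dead:beef::]/foo/')
Traceback (most recent call last):
  ...
ValueError: Invalid IPv6 URL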