Issue 43883: Making urlparse WHATWG conformant
Created on 2021-04-18 19:43 by orsenthil, last changed 2022-04-11 14:59 by admin.
Messages (4)
msg391344 - Author: Senthil Kumaran (orsenthil)
Date: 2021-04-18 19:43

Mike Lissner reported that a set of test suites exercising extreme conditions with URLs, in conformance with url.spec.whatwg.org, is maintained here: https://github.com/web-platform-tests/wpt/tree/77da471a234e03e65a22ee6df8ceff7aaba391f8/url

These test cases were run against the urlparse and urljoin functions: https://gist.github.com/mlissner/4d2110d7083d74cff3893e261a801515

Quoting verbatim:

```
The basic idea is to iterate over the test cases and try joining and parsing them. The script wound up messier than I wanted b/c there's a fair bit of normalization you have to do (e.g., the test cases expect blank paths to be '/', while urlparse returns an empty string), but you'll get the idea.

The bad news is that of the roughly 600 test cases fewer than half pass. Some more normalization would fix some more of this, and I don't imagine all of these have security concerns (I haven't thought through it, honestly, but there are issues with domain parsing too that look meddlesome).

For now I've taken it as far as I can, and it should be a good start, I think. The final numbers the script cranks out are:

Done. 231/586 successes. 1 skipped.
```
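
For context, here is a minimal sketch of the kind of harness the gist describes. It assumes the WPT urltestdata.json layout (entries carrying "input", "base", "href", or "failure"); the normalization helper is illustrative, not the gist's exact logic:

```python
import json
from urllib.parse import urljoin, urlsplit, urlunsplit


def normalize(url):
    """Apply the kind of normalization the report mentions: the WPT
    expectations use '/' where urllib.parse returns an empty path."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path or "/",
                       parts.query, parts.fragment))


def run(path="urltestdata.json"):
    with open(path, encoding="utf-8") as f:
        cases = json.load(f)

    passed = failed = skipped = 0
    for case in cases:
        if isinstance(case, str):    # the WPT data mixes in comment strings
            continue
        if case.get("failure"):      # inputs the WHATWG spec says must be rejected
            skipped += 1
            continue
        try:
            result = urljoin(case.get("base") or "", case["input"])
        except ValueError:
            failed += 1
            continue
        if normalize(result) == normalize(case["href"]):
            passed += 1
        else:
            failed += 1

    print(f"Done. {passed}/{passed + failed} successes. {skipped} skipped.")


if __name__ == "__main__":
    run()
```

Since urljoin almost never raises, the "failure" cases (inputs the WHATWG spec requires parsers to reject) would need a separate check; this sketch simply skips them.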
msg391347 - Author: Serhiy Storchaka (serhiy.storchaka)
Date: 2021-04-18 22:01

It would be interesting to also test with the yarl module. It is based on urlparse and urljoin, but does extra normalization of %-encoding.
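
As a rough illustration of that comparison (the URLs here are made up; yarl is a third-party package, installed with `pip install yarl`), yarl joins URLs much like urljoin but by default also re-quotes the percent-encoding it parses:

```python
# Side-by-side sketch: stdlib urljoin vs. the third-party yarl package.
from urllib.parse import urljoin
from yarl import URL

base, rel = "http://example.com/a/%7euser/", "../b%2Fc"

print(urljoin(base, rel))        # urllib.parse leaves the %-escapes exactly as given
print(URL(base).join(URL(rel)))  # yarl re-quotes/normalizes the encoding while joining
```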
msg391427 - Author: STINNER Victor (vstinner)
Date: 2021-04-20 10:41

See also bpo-43882.
msg392969 - Author: Gregory P. Smith (gregory.p.smith)
Date: 2021-05-05 01:34

FWIW, rather than implementing our own URL parsing at all... wrapping a library extracted from a compatibly licensed major browser (Chromium or Firefox) and keeping it updated would avoid disparities. Unfortunately, I'm not sure how feasible this really is.

Do all of the API surfaces we must support in the stdlib for compatibility's sake with urllib line up with such a browser-core URL parsing library? Something to ponder. Unlikely something we'll actually do.
History

| Date | User | Action | Args |
|---|---|---|---|
| 2022-04-11 14:59:44 | admin | set | github: 88049 |
| 2021-05-05 01:34:26 | gregory.p.smith | set | messages: + |
| 2021-04-23 19:38:04 | gregory.p.smith | set | nosy: + gregory.p.smith |
| 2021-04-20 10:41:16 | vstinner | set | nosy: + vstinner, messages: + |
| 2021-04-19 20:24:35 | Mike.Lissner | set | nosy: + Mike.Lissner |
| 2021-04-19 03:24:56 | xtreak | set | nosy: + xtreak |
| 2021-04-18 22:01:16 | serhiy.storchaka | set | nosy: + serhiy.storchaka, messages: + |
| 2021-04-18 19:43:37 | orsenthil | create | |