PoC of implementation to show large output by wrapping up long lines #42
Conversation
Thanks for the contribution! Am I right that this makes the max line length 80, wrapping where necessary? The text of the issue is cut off for me. Would you have a case where this makes the output more readable? Are the line continuation characters handled well? I've had issues with those in the past. IIRC making the docstring into a raw string helps, but we don't do this automatically at the moment.
I was intending to have this conversation here (#43) so we can capture the requirements.
Title changed, hope that is more clear.
Regarding your request for an example where this improves clarity: we have a bunch of scrapers taking data from some feeds. Some of those feeds are single-line JSON. As of today the library would shorten the output with an ellipsis.
At the moment I have not given much thought to the above issue; this PR is quite a hacky one I created just to start this conversation.
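As an illustration only (this is not the PR's actual code, and the 80-column width and variable names are my assumptions), the wrapping idea could be sketched with the standard library's `textwrap`:

```python
import textwrap

# A long single-line repr, standing in for one of the single-line JSON
# feeds mentioned above. Today the output would be truncated with an
# ellipsis; the idea is to wrap it at 80 columns instead.
long_output = repr({"feed": "x" * 200})

# break_long_words is True by default, so even a long unbroken token
# gets split to fit the width.
wrapped = textwrap.wrap(long_output, width=80)

for line in wrapped:
    print(line)
```

Whether doctest can then match the wrapped lines back against the real output is exactly the line-continuation question raised above.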
I'm open to the idea! We'd probably need to make it configurable. Another approach would be for the user to format the output with a modified […]. It's worth testing the line continuations and seeing whether they make it easier for you; I added something in the README about how they're handled by doctests. Let me know your thoughts.
On the other hand, I was having trouble running […]. Regarding the […], I propose a […].
Yes, if you install poetry and then run […]. Yes, I think a […]. Thanks for checking out the […]. Cheers @AntonioL
I am investigating this issue right now. It turns out to be more complicated than it looks, as there are a lot of edge cases to handle. pytest should do more when it comes to reporting the captured output. Consider this case:

```python
def scraper(n: int):
    """
    Simple scraper testing.
    >>> scraper(1)
    """
    # Payload coming from https://httpbin.org/json
    return """{
  "slideshow": {
    "author": "Yours Truly",
    "date": "date of publication",
    "slides": [
      {
        "title": "Wake up to WonderWidgets!",
        "type": "all"
      },
      {
        "items": [
          "Why <em>WonderWidgets</em> are great",
          "Who <em>buys</em> WonderWidgets"
        ],
        "title": "Overview",
        "type": "all"
      }
    ],
    "title": "Sample Slide Show"
  }
}"""
```

The above test case fails. pytest-accept rewrites it as:

```python
def scraper(n: int):
    """
    Simple scraper testing.
    >>> scraper(1)
    '{\n "slideshow": {\n "author": "Yours Truly", \n "date": "date of publication", \n "slides": [\n {\n "title": "Wake up to WonderWidgets!", \n "type": "all"\n }, \n {\n "items": [\n "Why <em>WonderWidgets</em> are great", \n "Who <em>buys</em> WonderWidgets"\n ], \n "title": "Overview", \n "type": "all"\n }\n ], \n "title": "Sample Slide Show"\n }\n}'
    """
    # Payload coming from https://httpbin.org/json
    return """{
  "slideshow": {
    "author": "Yours Truly",
    "date": "date of publication",
    "slides": [
      {
        "title": "Wake up to WonderWidgets!",
        "type": "all"
      },
      {
        "items": [
          "Why <em>WonderWidgets</em> are great",
          "Who <em>buys</em> WonderWidgets"
        ],
        "title": "Overview",
        "type": "all"
      }
    ],
    "title": "Sample Slide Show"
  }
}"""
```

This rewriting is not correct: if I try to run it with `--doctest-modules` I get:

```
/usr/lib/python3.8/doctest.py:939: in find
    self._find(tests, obj, name, module, source_lines, globs, {})
../../.cache/pypoetry/virtualenvs/pytest-accept-OGiL4A3W-py3.8/lib/python3.8/site-packages/_pytest/doctest.py:522: in _find
    doctest.DocTestFinder._find(  # type: ignore
/usr/lib/python3.8/doctest.py:1001: in _find
    self._find(tests, val, valname, module, source_lines,
../../.cache/pypoetry/virtualenvs/pytest-accept-OGiL4A3W-py3.8/lib/python3.8/site-packages/_pytest/doctest.py:522: in _find
    doctest.DocTestFinder._find(  # type: ignore
/usr/lib/python3.8/doctest.py:989: in _find
    test = self._get_test(obj, name, module, globs, source_lines)
/usr/lib/python3.8/doctest.py:1073: in _get_test
    return self._parser.get_doctest(docstring, globs, name,
/usr/lib/python3.8/doctest.py:675: in get_doctest
    return DocTest(self.get_examples(string, name), globs,
/usr/lib/python3.8/doctest.py:689: in get_examples
    return [x for x in self.parse(string, name)
/usr/lib/python3.8/doctest.py:651: in parse
    self._parse_example(m, name, lineno)
/usr/lib/python3.8/doctest.py:720: in _parse_example
    self._check_prefix(want_lines, ' '*indent, name,
/usr/lib/python3.8/doctest.py:805: in _check_prefix
    raise ValueError('line %r of the docstring for %s has '
E   ValueError: line 6 of the docstring for scraper_test_example.scraper has inconsistent leading whitespace: ' "slideshow": {'
=========================== short test summary info ===========================
ERROR examples/scraper_test_example.py - ValueError: line 6 of the docstring for scraper_test_example.scraper has inconsistent leading whitespace: ' "slideshow": {'
!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
============================== 1 error in 0.34s ===============================
```

The issue is because of the `\n` escape sequences: inside a plain (non-raw) docstring, Python interprets them, so the expected output ends up containing real newlines. A correct rewrite would have to escape them:

```python
def scraper(n: int):
    """
    Simple scraper testing.
    >>> scraper(1)
    '{\\n "slideshow": {\\n "author": "Yours Truly", \\n "date": "date of publication", \\n "slides": [\\n {\\n "title": "Wake up to WonderWidgets!", \\n "type": "all"\\n }, \\n {\\n "items": [\\n "Why <em>WonderWidgets</em> are great", \\n "Who <em>buys</em> WonderWidgets"\\n ], \\n "title": "Overview", \\n "type": "all"\\n }\\n ], \\n "title": "Sample Slide Show"\\n }\\n}'
    """
    # Payload coming from https://httpbin.org/json
    return """{
  "slideshow": {
    "author": "Yours Truly",
    "date": "date of publication",
    "slides": [
      {
        "title": "Wake up to WonderWidgets!",
        "type": "all"
      },
      {
        "items": [
          "Why <em>WonderWidgets</em> are great",
          "Who <em>buys</em> WonderWidgets"
        ],
        "title": "Overview",
        "type": "all"
      }
    ],
    "title": "Sample Slide Show"
  }
}"""
```

Handling those corner cases adds some complexity. We should do this escaping only in the case of strings, which would require some kind of parsing of the pytest result. I am not sure this complexity is worth adding to pytest-accept. You mention this corner case in the README; your suggested fix is to use raw strings for the docstring. Why is this related to my idea of a newline-wrapping patch? Because my approach relied on adding a final line continuation character at the end of each line, which will need handling of corner cases like the above.
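For what it's worth, the mechanism behind the failure can be reproduced without pytest at all. This is a minimal sketch (the function names and docstring contents are mine, not from the repo):

```python
# The docstrings below stand in for what pytest-accept writes back.

def plain():
    """
    >>> f()
    'a\nb'
    """

def raw():
    r"""
    >>> f()
    'a\nb'
    """

# In `plain`, the compiler turns \n into a real newline, so the expected
# output spans two lines and doctest's parser raises the "inconsistent
# leading whitespace" ValueError seen above. In `raw`, the two characters
# backslash + n survive, which is exactly what repr() printed.
assert "\\n" not in plain.__doc__  # the \n collapsed into a newline
assert "\\n" in raw.__doc__        # literal backslash-n preserved
```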
Great, thanks for looking into this @AntonioL. To respond quickly: does it work if the docstring is made raw, i.e.:

```python
def scraper(n: int):
    r"""
    Simple scraper testing.
    ...
    """
```

I'd be up for making this better in pytest-accept; I'm asking because I also want to help you with your case.
Yes, it works if it is a raw docstring:

```python
def scraper(n: int):
    r"""
    Simple scraper testing.
    >>> scraper(1)
    '{\n "slideshow": {\n "author": "Yours Truly", \n "date": "date of publication", \n "slides": [\n {\n "title": "Wake up to WonderWidgets!", \n "type": "all"\n }, \n {\n "items": [\n "Why <em>WonderWidgets</em> are great", \n "Who <em>buys</em> WonderWidgets"\n ], \n "title": "Overview", \n "type": "all"\n }\n ], \n "title": "Sample Slide Show"\n }\n}'
    """
    # Payload coming from https://httpbin.org/json
    return """{
  "slideshow": {
    "author": "Yours Truly",
    "date": "date of publication",
    "slides": [
      {
        "title": "Wake up to WonderWidgets!",
        "type": "all"
      },
      {
        "items": [
          "Why <em>WonderWidgets</em> are great",
          "Who <em>buys</em> WonderWidgets"
        ],
        "title": "Overview",
        "type": "all"
      }
    ],
    "title": "Sample Slide Show"
  }
}"""
```

We would need another property like […]. For the time being I will be best served by using raw strings, converting docstrings whenever needed.
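The alternative to requiring raw docstrings (escaping the output before writing it back, as discussed above) could be sketched like this. The helper name is hypothetical; this is not pytest-accept's actual code:

```python
def escape_for_plain_docstring(line: str) -> str:
    # Hypothetical helper: double every backslash so that, after the
    # Python compiler processes a plain (non-raw) docstring, doctest
    # sees exactly the text that repr() produced.
    return line.replace("\\", "\\\\")

# repr() of a string containing newlines yields literal \n sequences:
expected = repr('{\n  "a": 1\n}')
escaped = escape_for_plain_docstring(expected)  # each \n becomes \\n
```

The hard part the conversation points at is knowing *when* to apply this: only string output needs it, which would require parsing the result.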
OK, yes, I think you're right. Thank you very much @AntonioL.
Thanks for the library, and apologies for the noise in creating this PR; I should have done a bit more research upfront!
Not at all, thank you @AntonioL. Please continue with any feedback you have on the library!
I created this PR just to start a conversation. I will create an issue and reference this PR as a proof of concept.