This is a step toward #1673 (switching entirely to pytest from myunit and runtests.py),
using some of the ideas developed in @kirbyfan64's PR #1723.
Both `py.test` with no arguments and `py.test mypy/test/testcheck.py` work just as you'd
hope, while `./runtests.py` continues to run all the tests.
The output is very similar to the myunit output. It doesn't spew voluminous stack traces or
other verbose data, and it continues using `assert_string_arrays_equal` to produce
nicely-formatted comparisons when e.g. type-checker error messages differ. On error it
even includes the filename and line number for the test case itself, which I've long wanted,
and I think pytest's separator lines and coloration make the output slightly easier to read
when there are multiple failures.
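To illustrate the kind of comparison helper described above, here is a minimal sketch of the idea behind `assert_string_arrays_equal` (this is not mypy's actual implementation, just an illustration built on the stdlib's `difflib`): compare two lists of lines and fail with a readable diff rather than raw reprs.

```python
import difflib

def assert_string_arrays_equal_sketch(expected, actual, msg):
    """Compare two lists of strings; fail with a readable unified diff.

    A sketch of the idea only, not mypy's real assert_string_arrays_equal:
    when e.g. type-checker error messages differ, a line-by-line diff is
    far easier to read than two dumped lists.
    """
    if expected != actual:
        diff = "\n".join(difflib.unified_diff(
            expected, actual,
            fromfile="expected", tofile="actual", lineterm=""))
        raise AssertionError(msg + "\n" + diff)
```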
The `-i` option from myunit is straightforwardly ported over as `--update-data`, giving it
a longer name because it feels like the kind of heavyweight and uncommon operation
that deserves such a name. It'd be equally straightforward to port over `-u`, but in the
presence of source control I think `--update-data` does the job on its own.
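For readers unfamiliar with how pytest takes custom flags: one way to register such an option is the `pytest_addoption` hook in `conftest.py`. The option name matches this PR; the exact wiring below is an illustrative sketch, not the PR's code.

```python
# conftest.py -- sketch of registering a custom flag with pytest.
# The '--update-data' name is from this PR; the wiring is illustrative.

def pytest_addoption(parser):
    # Like myunit's -i: rewrite the expected-output sections of the
    # .test data files to match actual output, instead of failing.
    # Test cases can later read it via config.getoption('--update-data').
    parser.addoption('--update-data', action='store_true', default=False,
                     help='Update test data files to match actual output')
```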
One small annoyance is that if there's a failure early in a test run, pytest doesn't print the
detailed report on that failure until the whole run is over. This has often annoyed me in
using pytest on other projects; useful workarounds include passing `-x` to make it stop
at the first failure, or `-k` to filter the set of tests to be run. For interactive use I think it'd
nearly always be preferable to do what myunit does by immediately printing the detailed
information, so I may come back to this later to try to get pytest to do that.
We don't yet take advantage of xdist to parallelize within a py.test run (though xdist
works for me out of the box in initial cursory testing). For now we just stick with the
runtests.py parallelization, so we set up a separate py.test command for each test module.
@JukkaL, I'd appreciate your giving this a spin and/or reviewing the code, given the helpful thoughts you had on #1723.
@kirbyfan64, I'd also appreciate your review, particularly given the time you've spent already thinking about this problem! Hopefully this illustrates in code some of my suggestions in #1723 about how to use the pytest API. I think there's plenty more to do to realize the dream of pytest all the way (#1673), and I'd be glad to see followup PRs from you.
@gnprice I gave it a quick spin and it looks good! Thanks for working on this (also thank you @kirbyfan64 for getting this started!). I noticed one thing -- `runtests.py` doesn't report the number of test cases for pytest tests. It should be pretty easy to fix by updating `parse_test_stats_from_output` in `mypy/waiter.py`.
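The fix might look roughly like this (a sketch, not the actual `mypy/waiter.py` code; the helper name here is hypothetical): pytest ends a run with a summary line like `=== 3 passed, 1 failed in 0.12 seconds ===`, so a regex can pull the counts out of it.

```python
import re

def parse_pytest_stats_sketch(output):
    """Sketch of counting test cases from pytest's final summary line.

    Not the real waiter.py code: finds a line like
    '=== 3 passed, 1 failed in 0.12 seconds ===' and totals the counts.
    Returns (passed, failed); (0, 0) if no summary line is found.
    """
    m = re.search(r'=+ (.*) in [\d.]+ seconds =+', output)
    if not m:
        return 0, 0
    passed = failed = 0
    for count, outcome in re.findall(r'(\d+) (\w+)', m.group(1)):
        if outcome == 'passed':
            passed += int(count)
        elif outcome in ('failed', 'error', 'errors'):
            failed += int(count)
    return passed, failed
```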
It's possible to get test failures immediately (though without stack traces) by using the pytest `-s` option.
Thanks for the review! I'll make that change and then land this -- can always make further changes later.
`py.test -s` is a good thought as a useful workaround for not getting failure details immediately -- I might use that sometime. Still, it isn't meant to solve that problem and it doesn't solve it well. It lets stdout and stderr pass through, and because `assert_string_arrays_equal` sends its output straight to stderr, you do get that data immediately. But you don't get any context, including which test is failing, and the formatting is also messed up by being thrown right into the middle of the `./s/F/...` progress line. So I'd still like to have a proper solution, and I may go back and attempt one later.
wip pytest for testcheck -- sort of mostly works!
fix setup/teardown -- now all seems to work!
customize output to be a bit more like myunit
take more control of error reporting
print traceback, except on SystemExit
wip implement -i as --update-data
revert accidentally-committed test change to test data
tweaks for polish
wire up into runtests
and update README
explain type: ignore
parse pytest test counts in runtests