Use pytest framework for tests #1463

Closed

jaraco opened this issue Jul 25, 2016 · 7 comments

@jaraco
Member

jaraco commented Jul 25, 2016

CherryPy currently relies on the nose framework for running tests. I suspect it depends only lightly on the specifics of nose, so it should be a fairly straightforward change to support pytest as the runner.

There are many reasons the project should switch to pytest over nose:

  • Pytest has better failure reporting. When something fails, it reports the traceback, the variables in scope, and the values that caused the assertion to fail.
  • Pytest rewrites asserts, so plain assert statements produce detailed failure output (see the sketch after this list).
  • Pytest does nice diffs of lists and dicts, hiding the common elements and highlighting the differences that caused the failure.
  • Pytest has nicer semantics for selecting and deselecting tests.
  • Pytest has better configurability around test collection.
  • Pytest has a bigger suite of plugins, better community support, and more rapid issue resolution.
  • Pytest has built-in support for pluggable fixtures and monkeypatching (mocks).
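
To make the fixture, assert-rewrite, and monkeypatching points concrete, here is a minimal, hypothetical sketch; none of these names or values come from the CherryPy test suite:

```python
import pytest


@pytest.fixture
def config():
    # Fixtures replace nose-style setup/teardown; pytest injects this
    # return value into any test that declares a "config" parameter.
    return {"server.socket_host": "127.0.0.1", "server.socket_port": 8080}


def test_socket_port(config):
    # A bare assert is enough: on failure, assert rewriting shows the
    # compared values and a diff of the two dicts.
    assert config["server.socket_port"] == 8080


def test_env_override(monkeypatch):
    # Built-in monkeypatching, no separate mocking library required.
    monkeypatch.setenv("CHERRYPY_TEST", "1")
    import os
    assert os.environ["CHERRYPY_TEST"] == "1"
```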

I realize some of these assertions I've made are subjective, but in my experience, py.test is superior in almost every way. It's a constant frustration to me that I can't use some of the powerful features of pytest.

I do love nose for its namesake, and the minimal output during test runs is nice, but those benefits pale in comparison to the pytest features I'm missing above.

Are there any objections to dropping nose and adopting pytest as the test framework?

@webknjaz
Member

Sounds reasonable. I've heard lots of good feedback about py.test as well.

@coady
Contributor

coady commented Aug 21, 2016

Re "minimal output during test runs is nice": py.test's verbosity options can be set in setup.cfg as well.

@jaraco
Member Author

jaraco commented Dec 23, 2016

For some stupid reason, Travis is putting the 3.5 test in the allowed failures: https://travis-ci.org/cherrypy/cherrypy/builds/186426365

But it's not doing that on master. What's weird is there's no difference in the matrix definition between the pytest branch and the master branch, the latter of which runs Python 3.5 tests normally.
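
For reference, an allowed failure is normally opted into explicitly in the matrix, along these lines (generic .travis.yml illustration, not CherryPy's actual config):

```yaml
# Generic Travis CI illustration of an allow_failures entry;
# the real CherryPy config may differ.
language: python
python:
  - "2.7"
  - "3.5"
matrix:
  allow_failures:
    - python: "3.5"
```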

@jaraco
Member Author

jaraco commented Dec 23, 2016

Well, 96d9db9 seemed to fix that.

@jaraco
Member Author

jaraco commented Dec 23, 2016

After restarting a couple of the jobs showing spurious failures, I have a clean build of CherryPy on pytest.

I'm going to merge this into master now, and file tickets for the individual test failures as needed.

jaraco closed this as completed in 48e46a8 on Dec 23, 2016
@webknjaz
Member

@jaraco You removed Python 3.2 from Travis CI config.
It seems we should completely remove 3.1/3.2 from the classifiers in setup.py, as well as from the officially supported versions listed everywhere.
Do you mind?
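
For reference, the entries in question are the Trove classifiers passed to setup(); an illustrative fragment (not the actual setup.py) would look like this, with the 3.1/3.2 lines being the ones to drop:

```python
# Illustrative setup.py fragment only; not copied from CherryPy's setup.py.
from setuptools import setup

setup(
    name="CherryPy",
    classifiers=[
        "Programming Language :: Python :: 3.1",  # would be removed
        "Programming Language :: Python :: 3.2",  # would be removed
        "Programming Language :: Python :: 3.3",
        "Programming Language :: Python :: 3.4",
        "Programming Language :: Python :: 3.5",
    ],
)
```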

@jaraco
Member Author

jaraco commented Dec 24, 2016

The latest release officially supports Python 3.2, even though the tests don't run properly, due primarily to the issue with tox. I plan to drop support for Python 3.2, but I'll wait to see what happens with tox-dev/tox#428 first. If tox is willing to support Python 3.2, CherryPy can too.
