Use pytest framework for tests #1463

Closed
jaraco opened this Issue Jul 25, 2016 · 7 comments

3 participants
@jaraco
Member

jaraco commented Jul 25, 2016

CherryPy currently relies on the nose framework for running tests. I suspect its reliance on nose-specific features is small, so it should be a fairly straightforward change to support pytest as the runner.

There are many reasons the project should switch to pytest over nose:

  • Pytest has better failure reporting: when something fails, it reports the traceback, the variables in scope, and the values that failed the assertion.
  • Pytest rewrites plain assert statements, so simple asserts produce informative failure messages.
  • Pytest does nice diffs of lists and dicts, masking the common elements so the differences that caused the failure stand out.
  • Pytest has nicer semantics for selecting and deselecting tests.
  • Pytest has better configurability around test collection.
  • Pytest has a bigger suite of plugins, better community support, and more rapid issue resolution.
  • Pytest has built-in support for pluggable fixtures and monkeypatching (mocks).
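As a hypothetical illustration (not taken from the CherryPy suite) of the plain-assert style these points describe — on failure, pytest's assert rewriting would report both sides of the comparison automatically:

```python
# Hypothetical pytest-style test: a plain assert, with no
# unittest-style self.assertEqual needed. On failure, pytest's
# assert rewriting reports the actual and expected values.
def normalize_header(name):
    """Illustrative helper: canonicalize an HTTP header name."""
    return name.strip().lower()

def test_normalize_header():
    # pytest collects any test_* function; this assert would be
    # rewritten to show both strings if the comparison failed.
    assert normalize_header("  Content-Type ") == "content-type"
```

The same function works unchanged under a plain Python run, which is part of the appeal: no framework-specific assertion API to learn.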

I realize some of these assertions are subjective, but in my experience, py.test is superior in almost every way. It's a constant frustration to me that I can't use some of pytest's more powerful features.

I do love nose for its namesake, and its minimal output during test runs is nice, but those benefits pale in comparison to the ones I'm missing above.

Are there any objections to dropping nose and adopting pytest as the test framework?

@webknjaz
Member

webknjaz commented Jul 25, 2016

Sounds reasonable. I've heard a lot of good feedback about py.test as well.

@coady
Contributor

coady commented Aug 21, 2016

Re "minimal output during test runs is nice": py.test's verbosity options can be set in setup.cfg as well.
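For instance — a hypothetical setup.cfg fragment, assuming a pytest version recent enough to read its options from a `[tool:pytest]` section there — quiet, nose-like output can be made the default via `addopts`:

```ini
; Hypothetical setup.cfg fragment (not the project's actual config):
; pytest reads the [tool:pytest] section, so the -q (quiet) flag can
; be applied to every run without typing it on the command line.
[tool:pytest]
addopts = -q
```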

@jaraco
Member

jaraco commented Dec 23, 2016

For some stupid reason, Travis is putting the 3.5 test in the allowed failures: https://travis-ci.org/cherrypy/cherrypy/builds/186426365

But it's not doing that on master. What's weird is there's no difference in the matrix definition between the pytest branch and the master branch, the latter of which runs Python 3.5 tests normally.
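For reference, the Travis matrix feature in question looks roughly like this — a hypothetical .travis.yml sketch, not the project's actual config:

```yaml
# Hypothetical .travis.yml sketch: entries under allow_failures are
# matched against jobs in the build matrix, and matching jobs may
# fail without failing the build. A match that is broader than
# intended would explain a 3.5 job unexpectedly being allowed to fail.
matrix:
  allow_failures:
    - python: "3.5"
      env: TOXENV=py35  # without this key, every Python 3.5 job matches
```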

@jaraco
Member

jaraco commented Dec 23, 2016

Well, 96d9db9 seemed to fix that.

@jaraco
Member

jaraco commented Dec 23, 2016

After restarting a couple of the jobs showing spurious failures, I have a clean build of CherryPy on pytest.

I'm going to merge this into master now, and file tickets for the individual test failures as needed.

@jaraco jaraco closed this in 48e46a8 Dec 23, 2016

@webknjaz
Member

webknjaz commented Dec 24, 2016

@jaraco You removed Python 3.2 from the Travis CI config.
It seems we should completely wipe 3.1/3.2 from the classifiers in setup.py, as well as from the officially supported versions everywhere.
Do you mind?
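The cleanup being proposed would amount to something like this — a hypothetical sketch using an invented subset of the trove-classifier list from setup.py:

```python
# Hypothetical sketch of the classifier cleanup: an invented subset
# of the classifiers list from setup.py, with the retired 3.1/3.2
# entries already dropped.
classifiers = [
    "Programming Language :: Python :: 2.7",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.3",
    "Programming Language :: Python :: 3.4",
    "Programming Language :: Python :: 3.5",
]

# The retired versions should no longer appear anywhere in the list.
assert not any(c.endswith(":: 3.1") or c.endswith(":: 3.2") for c in classifiers)
```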

@jaraco
Member

jaraco commented Dec 24, 2016

The latest release officially supports Python 3.2, even though the tests don't run properly, due primarily to the issue with tox. I plan to drop support for Python 3.2, but I'll wait to see what happens with tox-dev/tox#428 first. If tox is willing to support Python 3.2, CherryPy can too.
