Test suite upgrade #354

Open · 2 tasks · jaberg opened this issue Jan 17, 2018 · 0 comments

jaberg (Contributor) commented Jan 17, 2018

Thanks @aldenor for an overview of what we can do to make tests better. The main thrust of it seems to be:

  • switch from nose to pytest
  • move tests out of source tree

@aldenor wrote (#350):

  • First and foremost, please don't test in-source-tree; e.g., you would never catch packaging errors that way. Ideally, the tests should be moved to a /tests/ folder with no `__init__.py` in it (see the layout sketch after this list). In-tree tests could still be run easily via PYTHONPATH=. if need be.
  • Drop hyperopt's dependency on nose. There's no reason to install it if you're just using the package.
  • Set up automated tests on Travis (Linux / OS X) and AppVeyor (Windows). Could possibly use tox as well (on Travis) to simplify testing across interpreter versions, both locally and on CI. On Windows, though, you'll have to do it manually, since Python is normally installed via conda, which is not compatible with venv.
  • If using tox for testing, all additional test dependencies should be listed there (e.g. pytest, matplotlib, mongo, etc.).
  • Optionally, set up automated coverage tracking, e.g. on codecov.
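As a concrete illustration of the proposed layout, here is a minimal sketch; the file names and the smoke test below are hypothetical examples, not taken from hyperopt's actual suite:

```python
# Hypothetical layout (illustrative only):
#
#   hyperopt/          <- the installed package; no tests inside
#   tests/             <- top-level folder, deliberately without __init__.py
#       conftest.py
#       test_fmin.py
#
# tests/test_fmin.py -- a smoke test that runs against the *installed*
# package, so packaging errors (missing modules, bad manifests) surface.
from hyperopt import fmin, hp, tpe


def test_fmin_smoke():
    # Minimize a trivial 1-D objective; asserts only that the public API runs.
    best = fmin(
        fn=lambda x: x ** 2,
        space=hp.uniform("x", -1, 1),
        algo=tpe.suggest,
        max_evals=5,
    )
    assert -1 <= best["x"] <= 1
```

During development, the in-tree copy can still be exercised with `PYTHONPATH=. pytest tests/`, as noted above.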

To the tests themselves:

  • Would you consider switching to pytest? It's arguably a better test framework than nose, with a better test runner and an ecosystem of plugins. I could help with the migration if need be, once the test failures on master are fixed. Main pros: shared fixtures, fixture parametrization, less boilerplate, assertion introspection on failures, and native support for exact/approximate matching of numpy arrays (in the most recent versions); see the fixture sketch after this list.
  • Would it make sense to have some property-based tests, e.g. via hypothesis? Main pro: failing cases are shrunk to minimal counterexamples; see the hypothesis sketch after this list.
  • Ideally, test files should not import each other (e.g., stuff from test_domains being imported in other test files). Shared fixtures could be provided as proper (possibly parametrized) pytest fixtures; other shared helpers could be exposed via conftest.py.
  • matplotlib tests may fail if DISPLAY is not set, and running plotting tests opens a new plotting window, which is not nice. Plotting tests should configure the matplotlib backend to avoid both problems; see the conftest.py sketch after this list.
  • There are test failures noted in #315 (“Test failures on master?”), most of which are due to hyperopt not catching up with the latest NumPy conventions.
  • I've observed other test failures on different platforms as well; here's the list: test_basic*, test_mu_is_used*, test_cdf*, test_pdf_logpdf*, test_random*, TestExperimentWithThreads*, test_plot*, test_q1lognormal* (the exact failures can be reproduced, if needed, by re-enabling these tests in conda-forge/staged-recipes#4710, “Add hyperopt recipe”, and re-running the builds on all platforms).
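To make the pytest pros concrete, here is a hedged sketch of a shared, parametrized fixture plus approximate numeric matching; the seeds and the uniform-distribution checks are illustrative stand-ins, not hyperopt's actual tests:

```python
import numpy as np
import pytest


@pytest.fixture(params=[0, 42, 2018])
def rng(request):
    # One shared fixture, three seeds: every test that takes `rng`
    # automatically runs once per seed.
    return np.random.RandomState(request.param)


def test_samples_in_unit_interval(rng):
    samples = rng.uniform(0.0, 1.0, size=1000)
    assert samples.min() >= 0.0 and samples.max() <= 1.0


def test_mean_is_approximately_half(rng):
    samples = rng.uniform(0.0, 1.0, size=100000)
    # pytest.approx gives readable failure messages for numeric comparisons.
    assert samples.mean() == pytest.approx(0.5, abs=0.01)
```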
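And a sketch of what a property-based test via hypothesis could look like; the bounds invariant below is an assumed example (it uses hyperopt's `hp.uniform` and `hyperopt.pyll.stochastic.sample`), not something from the existing suite:

```python
from hypothesis import given
from hypothesis import strategies as st

from hyperopt import hp
from hyperopt.pyll.stochastic import sample


@given(
    low=st.floats(min_value=-10, max_value=10),
    width=st.floats(min_value=0.1, max_value=10),
)
def test_uniform_sample_stays_in_bounds(low, width):
    # Property: a draw from hp.uniform(low, low + width) always lies in
    # [low, low + width]. On failure, hypothesis shrinks the inputs to a
    # minimal counterexample.
    space = hp.uniform("x", low, low + width)
    assert low <= sample(space) <= low + width
```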
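Finally, a conftest.py sketch addressing the DISPLAY / plotting-window issue; the `figure` fixture is hypothetical, shown only to illustrate sharing helpers via conftest instead of cross-imports between test files:

```python
# tests/conftest.py -- fixtures here are visible to every test file,
# so test modules never need to import from one another.
import matplotlib

# Select the non-interactive Agg backend before pyplot is first imported:
# plots render to in-memory buffers, no DISPLAY needed, no windows opened.
matplotlib.use("Agg")

import matplotlib.pyplot as plt
import pytest


@pytest.fixture
def figure():
    # Give each test a fresh figure and close it afterwards, avoiding
    # cross-test state and memory growth.
    fig = plt.figure()
    yield fig
    plt.close(fig)
```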