Move to pytest #72

Closed

amueller opened this issue Sep 10, 2020 · 6 comments

Comments
@amueller
Contributor

We should use pytest as the test runner.

@bpkroth
Contributor

bpkroth commented Sep 10, 2020

For the record, one of the goals here is to be able to get some better stats on how long tests take.
See Also #67

@byte-sculptor
Contributor

I'm still educating myself about the relative merits of these two frameworks. What features of pytest particularly take your fancy? I know you mentioned parameterizable tests in the past...

@amueller
Contributor Author

amueller commented Sep 15, 2020

Parametrizable tests, better error messages, and plugins (coverage, among others).

pytest is a bit magic: it parses the AST to figure out how to give you a good error message, so if you write

b = 5
assert b == 6

it tells you "assert failed 5 != 6", while unittest or just plain Python would say "AssertionError(False)".
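
A minimal sketch of the parametrized-test style mentioned above (pytest.mark.parametrize is the actual pytest decorator; the function name and values are invented for illustration):

import pytest

@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),
    (2, 3, 5),
    (10, -1, 9),
])
def test_add(a, b, expected):
    # Each tuple above becomes its own reported test case.
    assert a + b == expected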

I don't recall seeing a package not using pytest recently. There used to be nosetests, but that is no longer maintained and most packages moved from nosetests to pytest.
Examples of packages that use pytest are numpy, pandas, sklearn, requests, https://github.com/microsoft/hummingbird, https://github.com/microsoft/DeepSpeed, https://github.com/microsoft/coax, ...

@bpkroth
Contributor

bpkroth commented Sep 15, 2020

The other reason we were discussing this was in the course of troubleshooting #67 and trying to figure out which tests were taking a long time.

I had written this nasty one-liner to try to identify how many seconds each test took:

last_ts=''; ./scripts/run-python-tests.sh | logger -s -t python-test 2>&1 | grep 'python-test: test_' | while read line; do echo "$line"; ts=$(echo "$line" | egrep -o '^<13>Sep 10 [0-9:]+ ' | sed 's/^<13>//' | xargs -I{} date -d"{}" +%s); if [ -n "$last_ts" ]; then let time_took=ts-last_ts; echo $time_took; fi; last_ts=$ts; done | egrep -B1 '^[1-9][0-9]+'
<13>Sep 10 21:01:39 python-test: test_random_search_optimizer (mlos.Optimizers.ExperimentDesigner.NumericOptimizers.unit_tests.TestRandomSearchOptimizer.TestRandomSearchOptimizer) ... ok
85
--
<13>Sep 10 21:01:50 python-test: test_random_function_configs (mlos.Optimizers.ExperimentDesigner.UtilityFunctions.unit_tests.TestConfidenceBoundUtilityFunction.TestConfidenceBoundUtilityFunction) ... ok
11
--
<13>Sep 10 21:02:04 python-test: test_default_homogeneous_random_forest_model (mlos.Optimizers.RegressionModels.unit_tests.TestHomogeneousRandomForestRegressionModel.TestHomogeneousRandomForestRegressionModel) ...
14
--
<13>Sep 10 21:02:52 python-test: test_hierarchical_quadratic_cold_start (mlos.Optimizers.unit_tests.TestBayesianOptimizer.TestBayesianOptimizer) ...  9881.271554996536
12
<13>Sep 10 21:04:00 python-test: test_hierarchical_quadratic_cold_start_random_configs (mlos.Optimizers.unit_tests.TestBayesianOptimizer.TestBayesianOptimizer) ...
68
<13>Sep 10 21:04:15 python-test: test_translating_dataframe_from_categorical_hierarchical_to_discrete_flat_hypergrid (mlos.Spaces.HypergridAdapters.unit_tests.TestCategoricalToDiscreteHypergridAdapter.TestCategoricalToDiscreteHypergridAdapter) ... ok
15

However, @amueller pointed out that pytest just gives us that with a CLI option.
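
(Presumably this is pytest's built-in --durations option, e.g.:)

# Report the 10 slowest test durations at the end of the run;
# --durations=0 would report them all.
pytest --durations=10 -v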

@byte-sculptor
Contributor

That sounds really neat. What does it take to migrate?

@amueller
Contributor Author

We need a pytest.ini to store the pattern that matches our tests, and then we can replace the command in CI and in the test scripts. Both should be quite easy, but we need to make sure we actually run all the tests.
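
A minimal sketch of what that pytest.ini might look like (the collection patterns here are placeholders that would need to match the repo's actual test naming conventions):

[pytest]
# Placeholder collection patterns; adjust to match how the existing tests are named.
python_files = Test*.py
python_classes = Test*
python_functions = test_*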

Then we can gradually start using some of the fancier features if we want to, but really there's nothing else we need to do. Happy to make a PR in a bit.

@bpkroth closed this as completed Mar 17, 2023