PYTHON-5055 - Convert test_client.py to unittest #2074
Changes from 29 commits
```diff
@@ -31,7 +31,7 @@ set -o xtrace
 AUTH=${AUTH:-noauth}
 SSL=${SSL:-nossl}
 TEST_SUITES=${TEST_SUITES:-}
-TEST_ARGS="${*:1}"
+TEST_ARGS=("${*:1}")

 export PIP_QUIET=1  # Quiet by default
 export PIP_PREFER_BINARY=1  # Prefer binary dists by default
@@ -206,6 +206,7 @@ if [ -n "$TEST_INDEX_MANAGEMENT" ]; then
   TEST_SUITES="index_management"
 fi

+# shellcheck disable=SC2128
 if [ -n "$TEST_DATA_LAKE" ] && [ -z "$TEST_ARGS" ]; then
   TEST_SUITES="data_lake"
 fi
@@ -235,7 +236,7 @@ if [ -n "$PERF_TEST" ]; then
   TEST_SUITES="perf"
   # PYTHON-4769 Run perf_test.py directly otherwise pytest's test collection negatively
   # affects the benchmark results.
-  TEST_ARGS="test/performance/perf_test.py $TEST_ARGS"
+  TEST_ARGS+=("test/performance/perf_test.py")
 fi

 echo "Running $AUTH tests over $SSL with python $(uv python find)"
@@ -251,7 +252,7 @@ if [ -n "$COVERAGE" ] && [ "$PYTHON_IMPL" = "CPython" ]; then
   # Keep in sync with combine-coverage.sh.
   # coverage >=5 is needed for relative_files=true.
   UV_ARGS+=("--group coverage")
-  TEST_ARGS="$TEST_ARGS --cov"
+  TEST_ARGS+=("--cov")
 fi

 if [ -n "$GREEN_FRAMEWORK" ]; then
@@ -265,15 +266,39 @@ PIP_QUIET=0 uv run ${UV_ARGS[*]} --with pip pip list
 if [ -z "$GREEN_FRAMEWORK" ]; then
     # Use --capture=tee-sys so pytest prints test output inline:
     # https://docs.pytest.org/en/stable/how-to/capture-stdout-stderr.html
-    PYTEST_ARGS="-v --capture=tee-sys --durations=5 $TEST_ARGS"
+    PYTEST_ARGS=("-v" "--capture=tee-sys" "--durations=5" "${TEST_ARGS[@]}")
     if [ -n "$TEST_SUITES" ]; then
-      PYTEST_ARGS="-m $TEST_SUITES $PYTEST_ARGS"
+      # Workaround until unittest -> pytest conversion is complete
+      if [[ "$TEST_SUITES" == *"default_async"* ]]; then
+        ASYNC_PYTEST_ARGS=("-m asyncio" "--junitxml=xunit-results/TEST-asyncresults.xml" "${PYTEST_ARGS[@]}")
+      else
+        ASYNC_PYTEST_ARGS=("-m asyncio and $TEST_SUITES" "--junitxml=xunit-results/TEST-asyncresults.xml" "${PYTEST_ARGS[@]}")
+      fi
+      PYTEST_ARGS=("-m $TEST_SUITES and not asyncio" "${PYTEST_ARGS[@]}")
+    else
+      ASYNC_PYTEST_ARGS=("-m asyncio" "--junitxml=xunit-results/TEST-asyncresults.xml" "${PYTEST_ARGS[@]}")
     fi
+    # Workaround until unittest -> pytest conversion is complete
+    set +o errexit
     # shellcheck disable=SC2048
-    uv run ${UV_ARGS[*]} pytest $PYTEST_ARGS
+    uv run ${UV_ARGS[*]} pytest "${PYTEST_ARGS[@]}"
+    exit_code=$?
+
+    # shellcheck disable=SC2048
+    uv run ${UV_ARGS[*]} pytest "${ASYNC_PYTEST_ARGS[@]}" "--collect-only"
+    collected=$?
+    set -o errexit
+    # If we collected at least one async test, run all collected tests
+    if [ $collected -ne 5 ]; then
+      # shellcheck disable=SC2048
+      uv run ${UV_ARGS[*]} pytest "${ASYNC_PYTEST_ARGS[@]}"
+    fi
+    if [ $exit_code -ne 0 ]; then
+      exit $exit_code
+    fi
 else
     # shellcheck disable=SC2048
-    uv run ${UV_ARGS[*]} green_framework_test.py $GREEN_FRAMEWORK -v $TEST_ARGS
+    uv run ${UV_ARGS[*]} green_framework_test.py $GREEN_FRAMEWORK -v "${TEST_ARGS[@]}"
 fi

 # Handle perf test post actions.
```

Review thread on the "collect then run" workaround:

**Comment:** What's the benefit of using the "collect then run" approach? Wouldn't it be simpler to run pytest even if all tests are skipped?

**Reply:** Good point. My original intention was to avoid the error-handling that would require, but this approach doesn't avoid that either.
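As background for the diff above, the two bash idioms the patch leans on can be sketched in isolation. This is an illustrative script, not part of the PR; `STR_ARGS`, `ARR_ARGS`, and `run_suite` are made-up names. It shows why quoted array expansion (as in `"${TEST_ARGS[@]}"`) preserves arguments that contain spaces where unquoted string expansion does not, and how temporarily disabling `errexit` lets the script capture an exit status without aborting. The `-ne 5` check in the diff relies on pytest's documented exit code 5, "no tests were collected".

```shell
#!/usr/bin/env bash
set -o errexit

# 1) String vs. array argument handling.
STR_ARGS="-k some expr"      # unquoted expansion word-splits into 3 words
count_str=0
for word in $STR_ARGS; do count_str=$((count_str + 1)); done

ARR_ARGS=("-k" "some expr")  # quoted "${ARR_ARGS[@]}" keeps 2 intact elements
count_arr=${#ARR_ARGS[@]}

# 2) Capturing an exit status while errexit is globally enabled.
run_suite() { return "$1"; } # stand-in for a pytest invocation

set +o errexit               # allow non-zero statuses without aborting
run_suite 1; exit_code=$?
run_suite 5; collected=$?    # pytest exit code 5: no tests collected
set -o errexit

echo "words=$count_str elements=$count_arr exit=$exit_code collected=$collected"
```

Passing `"${ARR_ARGS[@]}"` to a command hands each element over as one argument, which is why the patch converts `TEST_ARGS` and `PYTEST_ARGS` to arrays instead of whitespace-joined strings.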
Review thread on coverage:

**Comment:** What about coverage? Does that pick up both test runs automatically?

**Reply:** The coverage runs pick up both test cases: https://spruce.mongodb.com/task/mongo_python_driver_test_rhel8_python3.9_cov_test_8.0_replica_set_noauth_ssl_sync_async_patch_dc182310dabff470dbbfc7da3c09eb6a4e08dfed_6797d0671d810000071bc171_25_01_27_18_28_57?execution=0&sortBy=STATUS&sortDir=ASC

**Reply:** This shows it's not working as expected. The second coverage run will overwrite the `.coverage` output file by default: https://coverage.readthedocs.io/en/7.6.10/cmd.html#data-file

**Reply:** Good catch, does it make sense to use `--append` on the second run to keep the single file?

**Reply:** The alternative would be to use a separate file with `COVERAGE_FILE` and then `coverage combine` the two.
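A sketch of the two options discussed, using illustrative placeholders (the array names stand in for the script's real sync and async pytest invocations; the data-file names are made up). Option 1 uses pytest-cov's `--cov-append` flag so the second run adds to the existing `.coverage` file; option 2 writes separate data files via the `COVERAGE_FILE` environment variable and merges them with `coverage combine`.

```shell
#!/usr/bin/env bash
# Illustrative CI fragment, not the PR's actual code.

# Option 1: append the async run's data to the same .coverage file.
uv run pytest "${PYTEST_ARGS[@]}" --cov
uv run pytest "${ASYNC_PYTEST_ARGS[@]}" --cov --cov-append

# Option 2: write separate data files, then merge them into one .coverage.
COVERAGE_FILE=.coverage.sync uv run pytest "${PYTEST_ARGS[@]}" --cov
COVERAGE_FILE=.coverage.async uv run pytest "${ASYNC_PYTEST_ARGS[@]}" --cov
coverage combine .coverage.sync .coverage.async
```

Option 2 mirrors what the repo's `combine-coverage.sh` approach (mentioned in the diff's comments) would need: independent data files that a later `coverage combine` step folds together.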