[tests] Looking to fix hanging ci build due to pytest #1218
nemesifier merged 4 commits into master
Conversation
Walkthrough
This PR refactors the WebSocket consumer's `_is_user_authenticated` to return the user's authenticated state without closing the connection internally, preserving connect-level semantics that still close on failure. It adds pytest timeout and asyncio strict/debug settings to pytest.ini, installs pytest-timeout in test requirements, fixes minor indentation in a test file, and modifies the runtests script to run a dedicated Selenium test phase with adjusted flags.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
72f9287 to f38dbfa
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@pytest.ini`:
- Line 2: The global test timeout in pytest.ini (addopts) is set to an
aggressive --timeout=5 which will cause false failures on slower CI runners;
update the addopts entry in pytest.ini to a safer default (e.g., replace
--timeout=5 with --timeout=30) or remove the global --timeout flag and instead
mark slow tests individually using `@pytest.mark.timeout(...)` on specific test
functions to avoid killing legitimate slower tests while retaining timeout
protection for hung tests.
- Around lines 7-9: Add the missing pytest-asyncio test dependency so tests using
`@pytest.mark.asyncio` and pytest.ini settings (asyncio_mode = strict,
asyncio_debug = true, log_cli = true) run in CI; update the test requirements
file (requirements-test.txt) to include "pytest-asyncio" (pin a compatible
version if your project pins others) and ensure any CI test environment installs
that requirements file before running pytest.
In `@runtests`:
- Around lines 17-18: Restore the commented coverage-export commands so coverage
data from the parallel test runs is merged and exported: re-enable "coverage
combine" and "coverage xml" (and any other commented blocks related to coverage
export) in the runtests script so the --parallel runs are combined into a
Cobertura XML for the CI "Upload Coverage" step; ensure the restored commands
run after both parallel test invocations and produce the expected XML artifact
for the Upload Coverage job.
🧹 Nitpick comments (3)
.github/workflows/ci.yml (2)
25-35: Ensure commented-out matrix entries and excludes are restored before merging.
Since this is a draft PR for debugging CI hangs, just a reminder that the removed Python 3.11/3.13 versions and the `exclude` block should be uncommented before this lands on `master`, otherwise CI coverage would silently shrink.
80-81: QA checks are disabled — remember to re-enable.
Same as above: the commented-out QA step should be restored before merge to avoid skipping lint/formatting checks on `master`.
runtests (1)
8-9: Selenium tests disabled — restore before merge.
Same as the other temporarily-disabled blocks in this PR.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
.github/workflows/ci.yml
openwisp_controller/config/base/channels_consumer.py
openwisp_controller/connection/tests/pytest.py
pytest.ini
requirements-test.txt
runtests
runtests.py
💤 Files with no reviewable changes (1)
- runtests.py
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-01-15T15:05:49.557Z
Learnt from: DragnEmperor
Repo: openwisp/openwisp-controller PR: 1175
File: openwisp_controller/config/management/commands/clear_last_ip.py:38-42
Timestamp: 2026-01-15T15:05:49.557Z
Learning: In Django projects, when using select_related() to traverse relations (for example, select_related("organization__config_settings")), the traversed relation must not be deferred. If you also use .only() in the same query, include the relation name or FK field (e.g., "organization" or "organization_id") in the .only() list to avoid the error "Field X cannot be both deferred and traversed using select_related at the same time." Apply this guideline to Django code in openwisp_controller/config/management/commands/clear_last_ip.py and similar modules by ensuring any select_related with an accompanying only() includes the related field names to prevent deferred/traversed conflicts.
Applied to files:
openwisp_controller/config/base/channels_consumer.py
openwisp_controller/connection/tests/pytest.py
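To make the pattern this learning encodes concrete, a minimal sketch; the query shape follows the learning, while the field list is illustrative:

```python
# Hypothetical query; Device stands in for this repo's config Device model.
from openwisp_controller.config.models import Device

qs = Device.objects.select_related(
    "organization__config_settings"
).only(
    # Including "organization" keeps the traversed FK out of the deferred
    # set; omitting it raises "Field X cannot be both deferred and
    # traversed using select_related at the same time."
    "name",
    "organization",
)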
📚 Learning: 2026-01-15T15:07:17.354Z
Learnt from: DragnEmperor
Repo: openwisp/openwisp-controller PR: 1175
File: openwisp_controller/geo/estimated_location/tests/tests.py:172-175
Timestamp: 2026-01-15T15:07:17.354Z
Learning: In this repository, flake8 enforces E501 (line too long) via setup.cfg (max-line-length = 88) while ruff ignores E501 via ruff.toml. Therefore, use '# noqa: E501' on lines that intentionally exceed 88 characters to satisfy flake8 without affecting ruff checks. This applies to Python files across the project (any .py) and is relevant for tests as well. Use sparingly and only where breaking lines is not feasible without hurting readability or functionality.
Applied to files:
openwisp_controller/config/base/channels_consumer.py
openwisp_controller/connection/tests/pytest.py
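For illustration, the kind of line this applies to (the URL is hypothetical):

```python
# flake8 (setup.cfg, max-line-length = 88) would flag this line; ruff
# already ignores E501, so the directive only placates flake8.
DOCS_URL = "https://openwisp.io/docs/stable/controller/user/some-hypothetical-long-anchor.html"  # noqa: E501
```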
🧬 Code graph analysis (1)
openwisp_controller/connection/tests/pytest.py (3)
openwisp_controller/config/admin.py (2)
  save (403-416), save (1261-1270)
openwisp_controller/config/base/device.py (1)
  save (270-288)
openwisp_controller/connection/base/models.py (3)
  save (127-134), save (375-380), save (509-520)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Python==3.10 | django~=4.2.0
- GitHub Check: Python==3.12 | django~=4.2.0
- GitHub Check: Python==3.12 | django~=5.1.0
- GitHub Check: Python==3.10 | django~=5.1.0
- GitHub Check: Python==3.12 | django~=5.2.0
- GitHub Check: Python==3.10 | django~=5.2.0
🔇 Additional comments (3)
requirements-test.txt (1)
2-2: LGTM — the `pytest-timeout` addition is consistent with the new `--timeout=5` in `pytest.ini`.
openwisp_controller/connection/tests/pytest.py (1)
47-53: Good fix — DB operations now execute within the SSH mock context.
Moving `command.save()` and `command.refresh_from_db()` inside the `mock.patch("paramiko.SSHClient.exec_command")` block ensures the background command execution triggered by `save()` → `_schedule_command()` runs against the mock rather than a real SSH client, which is a likely root cause of the CI hang.
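A condensed sketch of the resulting test shape; the fixture names are hypothetical stand-ins for what the real test constructs:

```python
from unittest import mock


def test_command_execution_is_mocked(command, fake_exec_output):
    # `command` and `fake_exec_output` are illustrative stand-ins; the
    # point is only the ordering relative to the patch context.
    with mock.patch("paramiko.SSHClient.exec_command") as mocked:
        mocked.return_value = fake_exec_output
        # save() schedules the background command, whose SSH call must
        # hit the mock — hence both calls stay inside the context.
        command.save()
        command.refresh_from_db()
    assert command.status == "success"
```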
openwisp_controller/config/base/channels_consumer.py (1)
12-13: Good simplification — removes the redundant `self.close()` call.
The caller `connect()` (lines 38-39) already closes the connection on auth/authz failure, so having `_is_user_authenticated` also close was a double-close that could contribute to hangs or unexpected behavior in the websocket lifecycle.
Minor nit: `self.scope["user"].is_authenticated` already returns a bool in modern Django, so `is True` is redundant — a plain `return self.scope["user"].is_authenticated` would suffice. Not blocking.
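A minimal sketch of the resulting split of responsibilities (class name and surrounding shape are illustrative, not the file's exact code):

```python
from channels.generic.websocket import AsyncJsonWebsocketConsumer


class DeviceConsumer(AsyncJsonWebsocketConsumer):  # hypothetical name
    def _is_user_authenticated(self):
        # After the refactor: report state only, no side effects
        return self.scope["user"].is_authenticated

    async def connect(self):
        # connect() is now the sole owner of the close-on-failure
        # decision, eliminating the earlier double-close
        if self._is_user_authenticated():
            await self.accept()
        else:
            await self.close()
```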
```diff
@@ -1,6 +1,9 @@
 [pytest]
-addopts = -p no:warnings --create-db --reuse-db --nomigrations
+addopts = -p no:warnings --create-db --reuse-db --nomigrations --timeout=5
```
5-second timeout is very aggressive for production CI.
A global --timeout=5 will kill any test that takes longer than 5 seconds, which is fine for hunting down hangs in a draft, but will likely cause false failures on slower CI runners or for tests involving real Redis, DB transactions, or websocket setup/teardown. Consider bumping this (e.g., --timeout=30) or using per-test @pytest.mark.timeout(...) for the known-slow tests before merging.
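If the per-test route is taken instead, a sketch of the marker usage (test name hypothetical; requires `pytest-timeout`):

```python
import pytest


@pytest.mark.timeout(60)  # overrides the global --timeout for this test
def test_selenium_device_page_loads():
    # placeholder body; the real test would drive a browser session,
    # which routinely exceeds a 5-second global budget
    ...
```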
@nemesifier, understood! I'll leave the timeout as is. 👍
pytest.ini
Outdated
```diff
+log_cli = true
+asyncio_mode = strict
+asyncio_debug = true
\ No newline at end of file
```
🧩 Analysis chain
🏁 Script executed:
# First, explore the repository structure to find dependency files
git ls-files | grep -E '(requirements|setup|pyproject)' | head -20
Repository: openwisp/openwisp-controller
Length of output: 144
🏁 Script executed:
# Search for pytest-asyncio mentions across the repository
rg 'pytest-asyncio' --type py --type txt --type toml --type cfg -n
Repository: openwisp/openwisp-controller
Length of output: 99
🏁 Script executed:
# Check the contents of requirements files
find . -name 'requirements*.txt' -o -name 'setup.py' -o -name 'setup.cfg' -o -name 'pyproject.toml' | head -10
Repository: openwisp/openwisp-controller
Length of output: 154
🏁 Script executed:
# Check requirements-test.txt
cat -n requirements-test.txt
Repository: openwisp/openwisp-controller
Length of output: 331
🏁 Script executed:
# Check pyproject.toml
cat -n pyproject.toml
Repository: openwisp/openwisp-controller
Length of output: 734
🏁 Script executed:
# Check setup.py for test dependencies
cat -n setup.py
Repository: openwisp/openwisp-controller
Length of output: 2127
🏁 Script executed:
# Let's check if openwisp-utils repository is documented or available
# Search for any reference to pytest-asyncio in the codebase
rg 'asyncio' --type py --type txt -i -n
Repository: openwisp/openwisp-controller
Length of output: 277
🌐 Web query:
openwisp-utils channels-test pytest-asyncio dependency
💡 Result:
openwisp-utils does not list a dedicated channels-test extra (the documented extras are things like qa, rest, selenium, while test deps are installed separately via requirements-test.txt). [1]
If you’re running Django Channels tests with pytest, you must add pytest-asyncio (and typically pytest-django) because Channels’ async test helpers require an async-capable pytest plugin. [2]
Practical fix: add pytest-asyncio to your test requirements (e.g., requirements-test.txt) in the environment where you run Channels-based tests. [1][2]
Sources: [1] OpenWISP Utils developer installation/test deps, [2] Django Channels testing docs (pytest requires pytest-asyncio).
Add pytest-asyncio to test dependencies.
The pytest.ini settings asyncio_mode = strict and asyncio_debug = true require pytest-asyncio, which is not currently listed in requirements-test.txt. The assumption that it comes transitively from openwisp-utils[channels-test] is incorrect—that extra does not include pytest-asyncio. Since the codebase already uses @pytest.mark.asyncio decorators in tests, pytest-asyncio must be explicitly added to prevent CI failures.
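For reference, a minimal test of the kind these settings govern (test name hypothetical; runs only once `pytest-asyncio` is installed):

```python
import asyncio

import pytest


# Under asyncio_mode = strict, async tests execute only when explicitly
# marked; without pytest-asyncio the marker is unknown and the coroutine
# is never actually run.
@pytest.mark.asyncio
async def test_event_loop_is_available():
    await asyncio.sleep(0)  # trivial await proving the loop runs
```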
runtests
Outdated
```
# coverage combine
# coverage xml
```
Coverage reporting will be broken with coverage combine and coverage xml commented out.
The CI workflow's "Upload Coverage" step expects a Cobertura XML file. With these lines disabled, --parallel coverage data from the two test runs (lines 5–6 and 12–15) won't be merged or exported, so the coveralls upload will fail or report incomplete data. Fine for a draft, but these must be restored alongside the other commented-out blocks.
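A sketch of the restored tail of `runtests`, assuming both phases record parallel coverage data (the exact invocations are illustrative, not the script's real lines):

```bash
#!/bin/bash
set -e

# Phase 1: Django test suite, writing a .coverage.<suffix> data file
coverage run --parallel-mode runtests.py

# Phase 2: pytest-based websocket/Selenium tests, same parallel mode
coverage run --parallel-mode -m pytest

# Merge the per-run data files and export Cobertura XML for the
# CI "Upload Coverage" step
coverage combine
coverage xml
```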
@coderabbitai do not review |
f38dbfa to 2177fea
@coderabbitai review |
✅ Actions performed: Review triggered.
Alternative approach to #1217.
@coderabbitai please do not review.