[tests] Fixed runtests.py to execute all tests, removed broken tests #581
nemesifier merged 1 commit into master from
Conversation
📝 Walkthrough
Removed several Selenium-based tests from tests/runtests.py (test_topology_graph, test_create_prefix_users, test_websocket_marker). Adjusted test_console_errors by removing one URL and its associated mobile-location setup. Modified test_forgot_password to call self.logout() before requesting a password reset. Updated the test_celery expected registered tasks (replaced three specific auto-create monitoring tasks with openwisp_monitoring.check.tasks.auto_create_check and removed select notification/monitoring tasks). Simplified the test runner entrypoint to use unittest.main(verbosity=2) instead of manual TestSuite construction.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/runtests.py`:
- Around line 230-235: The loop over self.base_driver.get_cookies() should use a
singular loop variable (cookie) and must explicitly handle the case where no
"sessionid" is found: iterate through cookies (cookie) to build reqHeader when
cookie["name"] == "sessionid", then before calling
request.Request(prefix_pdf_file_path, headers=reqHeader) check if reqHeader is
empty and raise a clear error or fail the test (e.g., RuntimeError or
AssertionError) that includes context (e.g., "sessionid cookie not found" and
the list of cookies) so tests don't silently proceed unauthenticated.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: a40e09a8-b88a-4c07-9e3b-87b12bba80ee
📒 Files selected for processing (1)
tests/runtests.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: CI Build
🔇 Additional comments (2)
tests/runtests.py (2)
491-492: LGTM! Using unittest.main(verbosity=2) ensures automatic discovery and execution of all TestCase classes in the module, which directly addresses the PR objective. This is cleaner and more maintainable than explicit TestSuite construction.
367-367: The Celery task consolidation from three separate auto-create tasks to auto_create_check is correct and expected for OpenWISP Monitoring 1.2.x, as confirmed by the project changelog and current Dockerfile configuration. The codebase is properly aligned with this library version, and the test at line 367 correctly reflects the consolidated task name.
tests/runtests.py
Outdated
```python
reqHeader = {}
for cookies in self.base_driver.get_cookies():
    if cookies["name"] == "sessionid":
        reqHeader = {"Cookie": f"sessionid={cookies['value']}"}
        break
curlRequest = request.Request(prefix_pdf_file_path, headers=reqHeader)
```
🧹 Nitpick | 🔵 Trivial
Consider explicit handling when sessionid cookie is not found.
The fix correctly avoids assuming cookie order, but if no sessionid cookie exists, reqHeader remains empty and the request proceeds without authentication. This could lead to confusing test failures (e.g., 401/403) without indicating the actual root cause.
Also, the loop variable cookies should be cookie (singular) since each iteration yields a single cookie dict.
Proposed improvement
```diff
  reqHeader = {}
- for cookies in self.base_driver.get_cookies():
-     if cookies["name"] == "sessionid":
-         reqHeader = {"Cookie": f"sessionid={cookies['value']}"}
+ for cookie in self.base_driver.get_cookies():
+     if cookie["name"] == "sessionid":
+         reqHeader = {"Cookie": f"sessionid={cookie['value']}"}
          break
+ else:
+     self.fail("sessionid cookie not found in browser cookies")
  curlRequest = request.Request(prefix_pdf_file_path, headers=reqHeader)
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
reqHeader = {}
for cookie in self.base_driver.get_cookies():
    if cookie["name"] == "sessionid":
        reqHeader = {"Cookie": f"sessionid={cookie['value']}"}
        break
else:
    self.fail("sessionid cookie not found in browser cookies")
curlRequest = request.Request(prefix_pdf_file_path, headers=reqHeader)
```
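The suggestion relies on Python's for/else construct, which is worth spelling out: the else clause of a for loop runs only when the loop finishes without hitting `break`. A standalone sketch with a made-up cookie list (no Selenium required):

```python
# Hypothetical cookie list: deliberately contains no "sessionid" entry.
cookies = [
    {"name": "csrftoken", "value": "abc123"},
    {"name": "theme", "value": "dark"},
]

header = {}
for cookie in cookies:
    if cookie["name"] == "sessionid":
        header = {"Cookie": f"sessionid={cookie['value']}"}
        break
else:
    # Runs only because the loop never hit `break` (no sessionid found);
    # in the actual test this branch would call self.fail(...) so the
    # request never proceeds unauthenticated.
    header = None

print(header)
```

Without the else branch, `header` would silently stay `{}` and the subsequent request would go out unauthenticated, producing a confusing 401/403 far from the real cause.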
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/runtests.py (1)
180-224: ⚠️ Potential issue | 🟠 Major
Coverage regression: removed UI console-error checks without replacement.
This PR removes geo/mobile/topology-related Selenium coverage from the suite path. That hides regressions instead of fixing root causes. If the tests are flaky, keep them visible via @unittest.skip(...) with linked issue IDs rather than deleting coverage.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/runtests.py` around lines 180-224: restore the removed UI console-error coverage by keeping the test_console_errors method but mark any flaky parts with unittest.skip and an issue reference instead of deleting them; specifically ensure the url_list and change_form_list checks still call self.open(...) / self.get_resource(...) followed by self.assertEqual([], self.console_error_check()) and self.assertIn("OpenWISP", self.base_driver.title), and where particular URLs or change_form entries are flaky, wrap those iterations or individual assertions with @unittest.skip("flaky - see ISSUE-<id>") or conditional skipping so coverage remains visible while linking the tracking issue.
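The skip-instead-of-delete pattern recommended above keeps the flaky coverage visible in every test report. A minimal runnable sketch (the class, method bodies, and issue reference are placeholders, not the real runtests.py code):

```python
import unittest


class TestServices(unittest.TestCase):
    # Flaky test kept visible rather than deleted; the skip reason
    # should link a real tracking issue (placeholder here).
    @unittest.skip("flaky in CI - see docker-openwisp issue tracker")
    def test_topology_graph(self):
        self.fail("would exercise the topology graph UI")

    def test_still_runs(self):
        self.assertTrue(True)


# Skipped tests are counted and reported, so the missing coverage
# stays visible instead of disappearing from the suite entirely.
result = unittest.TextTestRunner(verbosity=2).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestServices)
)
```

The runner output lists `test_topology_graph` as skipped with the reason string, which is exactly the visibility the review comment is asking to preserve.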
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 25f2a7b7-7ac5-4b0e-ada5-4c643c956f6e
📒 Files selected for processing (1)
tests/runtests.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: CI Build
🔇 Additional comments (2)
tests/runtests.py (2)
397-398: Entrypoint simplification looks good. Using unittest.main(verbosity=2) is cleaner and keeps standard unittest behavior.
273-273: No action required. The auto_create_check task is properly defined in openwisp-monitoring 1.2.0 (the pinned version via ~=1.2.0) and will be correctly registered in both the celery and celery_monitoring containers through the standard app.autodiscover_tasks() mechanism in celery.py. No version-dependent task naming issues exist for this repository's dependency pins.
> Likely an incorrect or invalid review comment.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tests/runtests.py (2)
282-286: ⚠️ Potential issue | 🟡 Minor
Duplicate task entry: delete_timeseries appears twice.
openwisp_monitoring.monitoring.tasks.delete_timeseries is listed at both line 282 and line 286. This will cause the test to pass even if the task is only registered once, potentially masking a regression.
🔧 Proposed fix
```diff
 "openwisp_monitoring.monitoring.tasks.timeseries_batch_write",
 "openwisp_monitoring.monitoring.tasks.timeseries_write",
 "openwisp_monitoring.monitoring.tasks.delete_timeseries",
 "openwisp_notifications.tasks.delete_ignore_object_notification",
 "openwisp_notifications.tasks.delete_notification",
 "openwisp_notifications.tasks.delete_obsolete_objects",
-"openwisp_monitoring.monitoring.tasks.delete_timeseries",
 "openwisp_notifications.tasks.delete_old_notifications",
```
Remove line 286 (the duplicate delete_timeseries entry).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/runtests.py` around lines 282-286: the test task list contains a duplicate entry "openwisp_monitoring.monitoring.tasks.delete_timeseries" (it appears twice); remove the second occurrence so the list contains a single "openwisp_monitoring.monitoring.tasks.delete_timeseries" entry, to avoid masking registration regressions.
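One way to catch this class of mistake automatically: assert that the expected-task list itself has no duplicates before comparing it against the registered tasks, since a set-based comparison silently collapses them. A sketch with an abbreviated, hypothetical list (not the full list from runtests.py):

```python
# Abbreviated expected-task list containing a duplicate like the one
# flagged in the review comment above.
expected_tasks = [
    "openwisp_monitoring.monitoring.tasks.timeseries_write",
    "openwisp_monitoring.monitoring.tasks.delete_timeseries",
    "openwisp_notifications.tasks.delete_obsolete_objects",
    "openwisp_monitoring.monitoring.tasks.delete_timeseries",  # duplicate
]

# Any entry appearing more than once is a bug in the test fixture itself.
duplicates = sorted({t for t in expected_tasks if expected_tasks.count(t) > 1})
# In a TestCase this would be: self.assertEqual([], duplicates)
print(duplicates)
```

Guarding the fixture like this makes a duplicate fail loudly instead of quietly weakening the registration check.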
180-223: ⚠️ Potential issue | 🟠 Major
PR title/commit mismatch: tests were removed rather than fixed.
The PR title states "[fix] Fixed the test script to execute all tests" but the commit message is "[chores] Removed failing tests". According to the changes, the following tests/checks were removed:
- test_topology_graph
- test_create_prefix_users
- test_websocket_marker
- /admin/geo/location/add/ from test_console_errors
Removing failing tests masks underlying issues rather than addressing them. If these tests are failing due to legitimate regressions, the root causes should be investigated. If the tests are obsolete or testing deprecated features, consider documenting why they were removed.
Could you clarify why each test was removed and whether the underlying functionality is still working correctly?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/runtests.py` around lines 180 - 223, The PR removed several tests (e.g., test_topology_graph, test_create_prefix_users, test_websocket_marker and the /admin/geo/location/add/ case from test_console_errors in tests/runtests.py) instead of fixing them; restore the removed test cases inside test_console_errors (re-add the /admin/geo/location/add/ URL to url_list and re-enable any deleted test methods), run the tests to reproduce failures, and for each failing test (test_topology_graph, test_create_prefix_users, test_websocket_marker) either fix the underlying code/fixtures that cause the failure or, if the feature is truly deprecated, add a clear comment in the test file and remove the test with an explanatory commit message and updated PR title describing deprecation; update commit messages to reflect the actual change (fix vs remove) and include references to the affected test methods (test_console_errors, test_topology_graph, test_create_prefix_users, test_websocket_marker) so reviewers can track why each was changed.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: d591cb94-80e0-47e3-9763-f7dc23433c30
📒 Files selected for processing (1)
tests/runtests.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: CI Build
🔇 Additional comments (3)
tests/runtests.py (3)
237-237: LGTM: calling logout() before testing password reset. The forgot password flow should be tested in a logged-out state, so this addition is correct.
398-399: Good simplification using unittest.main(). This change allows unittest to automatically discover and run all test classes and methods, which aligns with the PR objective of executing all tests. The previous manual TestSuite construction was error-prone if new tests weren't explicitly added.
274-276: Remove the inaccurate claim about task consolidation. The task name auto_create_check is not a consolidation of previous individual tasks. OpenWISP Monitoring has always used a single generic auto_create_check task. Auto-creation for different check types (ping, config, iperf3, etc.) is controlled by Django settings (OPENWISP_MONITORING_AUTO_PING, OPENWISP_MONITORING_AUTO_DEVICE_CONFIG_CHECK, etc.), not separate Celery tasks. The tasks auto_create_config_check, auto_create_iperf3_check, and auto_create_ping never existed.
> Likely an incorrect or invalid review comment.
Removed failing tests from the test suite.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/runtests.py (1)
1-398: ⚠️ Potential issue | 🟠 Major
PR title contradicts commit message and actual changes.
The PR title states "[fix] Fixed the test script to execute all tests" but the commit message says "[chores] Removed failing tests". Based on the AI summary, several tests were removed:
- test_topology_graph
- test_create_prefix_users
- test_websocket_marker
- the URL /admin/geo/location/add/ from test_console_errors
Removing failing tests is not the same as fixing the test script to execute all tests; it is the opposite. Deleting tests reduces test coverage and may hide underlying issues in the system.
Please clarify:
- Why were these tests failing?
- Is the underlying functionality being tested still working correctly?
- Should these tests be fixed rather than removed, or is the tested functionality no longer relevant?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/runtests.py` around lines 1 - 398, The PR title claims tests were "fixed to execute all tests" but the commit removed failing tests (test_topology_graph, test_create_prefix_users, test_websocket_marker, and the /admin/geo/location/add/ URL from test_console_errors) which contradicts that claim; please either (1) restore and fix those tests (locate them by name: test_topology_graph, test_create_prefix_users, test_websocket_marker and the reference to "/admin/geo/location/add/" in test_console_errors) so they pass, or (2) if removal is intentional, update the PR title and commit message to accurately state tests were removed and add a short justification in the PR description explaining why each test failed, whether the underlying functionality is still expected to work, and link any bug/issue tracking the work to re-enable or rewrite the tests (alternatively mark them as skipped/xfail with a clear TODO and issue reference).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 3cc13bc6-4726-4523-a2ce-44558b318093
📒 Files selected for processing (1)
tests/runtests.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: CI Build
🔇 Additional comments (3)
tests/runtests.py (3)
397-398: LGTM on the test runner simplification. Using unittest.main(verbosity=2) is the standard and recommended approach for running tests. This is cleaner than manually constructing a TestSuite and properly discovers and executes all test methods in the correct order.
234-247: No issue found. The self.logout() call at the start of test_forgot_password() is correct and necessary; password reset should only be tested when not logged in. The GitHub Actions CI workflow confirms these tests pass, indicating that SeleniumTestMixin.logout() handles the not-logged-in state gracefully.
274-274: Task name auto_create_check is correct for openwisp-monitoring v1.2.0 and later. The consolidated approach in openwisp-monitoring v1.2.0 uses a single openwisp_monitoring.check.tasks.auto_create_check Celery task that handles auto-creation for all check types based on enabled auto-create settings (e.g., OPENWISP_MONITORING_AUTO_PING, OPENWISP_MONITORING_AUTO_DEVICE_CONFIG_CHECK, etc.). This confirms the test's expected task list is in sync with the current upstream implementation.
Checklist
Description of changes
The runtests.py script was executing only one specific test; because of this, we did not notice the following:
Now all the tests in the test suite are run, as should always have been the case.