
[tests] Fixed runtests.py to execute all tests, removed broken tests#581

Merged
nemesifier merged 1 commit into master from fix-tests
Mar 30, 2026
Conversation

@pandafy
Member

@pandafy pandafy commented Mar 27, 2026

Checklist

  • I have read the OpenWISP Contributing Guidelines.
  • I have manually tested the changes proposed in this pull request.
  • N/A I have written new test cases for new code and/or updated existing tests for changes to existing code.
  • N/A I have updated the documentation.

Description of changes

The runtests.py script was executing only one specific test; for this reason we did not notice the following:

Now all the tests in the test suite are run, as it should have been.

@coderabbitai

coderabbitai bot commented Mar 27, 2026

📝 Walkthrough

Walkthrough

Removed several Selenium-based tests from tests/runtests.py (test_topology_graph, test_create_prefix_users, test_websocket_marker). Adjusted test_console_errors by removing one URL and its associated mobile-location setup. Modified test_forgot_password to call self.logout() before requesting a password reset. Updated test_celery expected registered tasks (replaced three specific auto-create monitoring tasks with openwisp_monitoring.check.tasks.auto_create_check and removed select notification/monitoring tasks). Simplified the test runner entrypoint to use unittest.main(verbosity=2) instead of manual TestSuite construction.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Title check (⚠️ Warning): the PR title uses the [tests] prefix instead of the required [fix], [change], [feature], [qa], [ci], [chores], or [docs] format, and doesn't align with the PR objective indicating a fix. Resolution: update the title to use a valid type prefix such as [fix] or [chores] and ensure consistency with the PR objectives and commit message.
  • Description check (❓ Inconclusive): the PR description addresses the core changes and marks N/A for inapplicable checklist items, but lacks detail on the specific tests removed and references an issue number without completing the template section. Resolution: clarify which specific tests were removed and why they're broken, complete the 'Reference to Existing Issue' section with the issue number, and add a brief summary of the celery task changes.
✅ Passed checks (1 passed)
  • Bug Fixes (✅ Passed): PR does not fix core user-facing functionality bugs; it only modifies test infrastructure by removing failing tests and updating the test runner.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/runtests.py`:
- Around line 230-235: The loop over self.base_driver.get_cookies() should use a
singular loop variable (cookie) and must explicitly handle the case where no
"sessionid" is found: iterate through cookies (cookie) to build reqHeader when
cookie["name"] == "sessionid", then before calling
request.Request(prefix_pdf_file_path, headers=reqHeader) check if reqHeader is
empty and raise a clear error or fail the test (e.g., RuntimeError or
AssertionError) that includes context (e.g., "sessionid cookie not found" and
the list of cookies) so tests don't silently proceed unauthenticated.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: a40e09a8-b88a-4c07-9e3b-87b12bba80ee

📥 Commits

Reviewing files that changed from the base of the PR and between 76cd872 and 82cfa4e.

📒 Files selected for processing (1)
  • tests/runtests.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: CI Build
🔇 Additional comments (2)
tests/runtests.py (2)

491-492: LGTM!

Using unittest.main(verbosity=2) ensures automatic discovery and execution of all TestCase classes in the module, which directly addresses the PR objective. This is cleaner and more maintainable than explicit TestSuite construction.
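A minimal sketch of what `unittest.main(verbosity=2)` does for a module like this, using the explicit loader/runner calls it wraps; `ExampleTest` is illustrative, not a test from runtests.py.

```python
import unittest

class ExampleTest(unittest.TestCase):
    def test_passes(self):
        self.assertTrue(True)

# unittest.main(verbosity=2) does roughly this under the hood:
# discover every TestCase in the module and run it, no hand-built suite.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExampleTest)
result = unittest.TextTestRunner(verbosity=2).run(suite)
```

Because discovery is automatic, newly added test methods run without anyone remembering to register them, which is exactly the failure mode the old manual TestSuite had.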


367-367: The Celery task consolidation from three separate auto-create tasks to auto_create_check is correct and expected for OpenWISP Monitoring 1.2.x, as confirmed by the project changelog and current Dockerfile configuration. The codebase is properly aligned with this library version, and the test at line 367 correctly reflects the consolidated task name.

Comment on lines 230 to 235
        reqHeader = {}
        for cookies in self.base_driver.get_cookies():
            if cookies["name"] == "sessionid":
                reqHeader = {"Cookie": f"sessionid={cookies['value']}"}
                break
        curlRequest = request.Request(prefix_pdf_file_path, headers=reqHeader)


🧹 Nitpick | 🔵 Trivial

Consider explicit handling when sessionid cookie is not found.

The fix correctly avoids assuming cookie order, but if no sessionid cookie exists, reqHeader remains empty and the request proceeds without authentication. This could lead to confusing test failures (e.g., 401/403) without indicating the actual root cause.

Also, the loop variable cookies should be cookie (singular) since each iteration yields a single cookie dict.

Proposed improvement
         reqHeader = {}
-        for cookies in self.base_driver.get_cookies():
-            if cookies["name"] == "sessionid":
-                reqHeader = {"Cookie": f"sessionid={cookies['value']}"}
+        for cookie in self.base_driver.get_cookies():
+            if cookie["name"] == "sessionid":
+                reqHeader = {"Cookie": f"sessionid={cookie['value']}"}
                 break
+        else:
+            self.fail("sessionid cookie not found in browser cookies")
         curlRequest = request.Request(prefix_pdf_file_path, headers=reqHeader)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:

        reqHeader = {}
        for cookies in self.base_driver.get_cookies():
            if cookies["name"] == "sessionid":
                reqHeader = {"Cookie": f"sessionid={cookies['value']}"}
                break
        curlRequest = request.Request(prefix_pdf_file_path, headers=reqHeader)

After:

        reqHeader = {}
        for cookie in self.base_driver.get_cookies():
            if cookie["name"] == "sessionid":
                reqHeader = {"Cookie": f"sessionid={cookie['value']}"}
                break
        else:
            self.fail("sessionid cookie not found in browser cookies")
        curlRequest = request.Request(prefix_pdf_file_path, headers=reqHeader)
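The for/else idiom behind this suggestion can be sketched stand-alone; `build_session_header` and the sample cookie dicts are illustrative, using plain lists instead of a Selenium driver.

```python
def build_session_header(cookies):
    """Build a Cookie header from a list of cookie dicts, failing loudly
    if no sessionid cookie is present."""
    header = {}
    for cookie in cookies:
        if cookie["name"] == "sessionid":
            header = {"Cookie": f"sessionid={cookie['value']}"}
            break
    else:
        # the else clause runs only when the loop finished without `break`
        raise AssertionError("sessionid cookie not found in browser cookies")
    return header

print(build_session_header([{"name": "csrftoken", "value": "x"},
                            {"name": "sessionid", "value": "abc123"}]))
# → {'Cookie': 'sessionid=abc123'}
```

The key point is that the error path is explicit: an unauthenticated request now fails at the cookie lookup with a clear message instead of surfacing later as a confusing 401/403.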

@github-project-automation github-project-automation bot moved this from To do (general) to In progress in OpenWISP Contributor's Board Mar 27, 2026
@pandafy pandafy closed this Mar 27, 2026
@github-project-automation github-project-automation bot moved this from In progress to Done in OpenWISP Contributor's Board Mar 27, 2026
@pandafy pandafy reopened this Mar 30, 2026
@github-project-automation github-project-automation bot moved this from Done to In progress in OpenWISP Contributor's Board Mar 30, 2026

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/runtests.py (1)

180-224: ⚠️ Potential issue | 🟠 Major

Coverage regression: removed UI console-error checks without replacement.

This PR removes geo/mobile/topology-related Selenium coverage from the suite path. That hides regressions instead of fixing root causes. If the tests are flaky, keep them visible via @unittest.skip(...) with linked issue IDs rather than deleting coverage.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/runtests.py` around lines 180 - 224, Restore the removed UI
console-error coverage by keeping the test_console_errors method but mark any
flaky parts with unittest.skip and an issue reference instead of deleting them;
specifically ensure the url_list and change_form_list checks still call
self.open(...) / self.get_resource(...) followed by self.assertEqual([],
self.console_error_check()) and self.assertIn("OpenWISP",
self.base_driver.title), and where particular URLs or change_form entries are
flaky wrap those iterations or individual assertions with `@unittest.skip`("flaky
- see ISSUE-<id>") or conditional skipping so coverage remains visible while
linking the tracking issue.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 25f2a7b7-7ac5-4b0e-ada5-4c643c956f6e

📥 Commits

Reviewing files that changed from the base of the PR and between 82cfa4e and c96534f.

📒 Files selected for processing (1)
  • tests/runtests.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: CI Build
🔇 Additional comments (2)
tests/runtests.py (2)

397-398: Entrypoint simplification looks good.

Using unittest.main(verbosity=2) is cleaner and keeps standard unittest behavior.


273-273: No action required. The auto_create_check task is properly defined in openwisp-monitoring 1.2.0 (the pinned version via ~=1.2.0) and will be correctly registered in both the celery and celery_monitoring containers through the standard app.autodiscover_tasks() mechanism in celery.py. No version-dependent task naming issues exist for this repository's dependency pins.

> Likely an incorrect or invalid review comment.


@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
tests/runtests.py (2)

282-286: ⚠️ Potential issue | 🟡 Minor

Duplicate task entry: delete_timeseries appears twice.

openwisp_monitoring.monitoring.tasks.delete_timeseries is listed at both line 282 and line 286. This will cause the test to pass even if the task is only registered once, potentially masking a regression.

🔧 Proposed fix
             "openwisp_monitoring.monitoring.tasks.timeseries_batch_write",
             "openwisp_monitoring.monitoring.tasks.timeseries_write",
             "openwisp_monitoring.monitoring.tasks.delete_timeseries",
             "openwisp_notifications.tasks.delete_ignore_object_notification",
             "openwisp_notifications.tasks.delete_notification",
             "openwisp_notifications.tasks.delete_obsolete_objects",
-            "openwisp_monitoring.monitoring.tasks.delete_timeseries",
             "openwisp_notifications.tasks.delete_old_notifications",

Remove line 286 (the duplicate delete_timeseries entry).
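How a duplicated expected entry can mask a registration regression, assuming the test compares sets of task names (a common pattern for registered-task assertions; the comparison style is an assumption, the task names are from the list above):

```python
# The duplicate "delete_timeseries" in the expected list is collapsed by a
# set-based comparison, so the assertion passes even though the expected
# list is wrong. Illustrative sketch; comparison style assumed.
expected = [
    "openwisp_monitoring.monitoring.tasks.delete_timeseries",
    "openwisp_notifications.tasks.delete_notification",
    "openwisp_monitoring.monitoring.tasks.delete_timeseries",  # duplicate
]
registered = [
    "openwisp_monitoring.monitoring.tasks.delete_timeseries",
    "openwisp_notifications.tasks.delete_notification",
]

assert set(expected) == set(registered)        # passes despite the duplicate
assert sorted(expected) != sorted(registered)  # a list comparison would catch it
```

Deduplicating (or sorting and comparing as lists) makes the mistake visible, which is why removing the second entry matters beyond tidiness.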

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/runtests.py` around lines 282 - 286, The test task list in
tests/runtests.py contains a duplicate entry
"openwisp_monitoring.monitoring.tasks.delete_timeseries" (appears twice); remove
the second occurrence so the list only contains a single
"openwisp_monitoring.monitoring.tasks.delete_timeseries" entry to avoid masking
registration regressions.

180-223: ⚠️ Potential issue | 🟠 Major

PR title/commit mismatch: Tests were removed rather than fixed.

The PR title states "[fix] Fixed the test script to execute all tests" but the commit message is "[chores] Removed failing tests". According to the changes, the following tests/checks were removed:

  • test_topology_graph
  • test_create_prefix_users
  • test_websocket_marker
  • /admin/geo/location/add/ from test_console_errors

Removing failing tests masks underlying issues rather than addressing them. If these tests are failing due to legitimate regressions, the root causes should be investigated. If the tests are obsolete or testing deprecated features, consider documenting why they were removed.

Could you clarify why each test was removed and whether the underlying functionality is still working correctly?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/runtests.py` around lines 180 - 223, The PR removed several tests
(e.g., test_topology_graph, test_create_prefix_users, test_websocket_marker and
the /admin/geo/location/add/ case from test_console_errors in tests/runtests.py)
instead of fixing them; restore the removed test cases inside
test_console_errors (re-add the /admin/geo/location/add/ URL to url_list and
re-enable any deleted test methods), run the tests to reproduce failures, and
for each failing test (test_topology_graph, test_create_prefix_users,
test_websocket_marker) either fix the underlying code/fixtures that cause the
failure or, if the feature is truly deprecated, add a clear comment in the test
file and remove the test with an explanatory commit message and updated PR title
describing deprecation; update commit messages to reflect the actual change (fix
vs remove) and include references to the affected test methods
(test_console_errors, test_topology_graph, test_create_prefix_users,
test_websocket_marker) so reviewers can track why each was changed.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: d591cb94-80e0-47e3-9763-f7dc23433c30

📥 Commits

Reviewing files that changed from the base of the PR and between c96534f and c758cfc.

📒 Files selected for processing (1)
  • tests/runtests.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: CI Build
🔇 Additional comments (3)
tests/runtests.py (3)

237-237: LGTM: Calling logout() before testing password reset.

The forgot password flow should be tested in a logged-out state, so this addition is correct.


398-399: Good simplification using unittest.main().

This change allows unittest to automatically discover and run all test classes and methods, which aligns with the PR objective of executing all tests. The previous manual TestSuite construction was error-prone if new tests weren't explicitly added.


274-276: Remove the inaccurate claim about task consolidation.

The task name auto_create_check is not a consolidation of previous individual tasks. OpenWISP Monitoring has always used a single generic auto_create_check task. Auto-creation for different check types (ping, config, iperf3, etc.) is controlled by Django settings (OPENWISP_MONITORING_AUTO_PING, OPENWISP_MONITORING_AUTO_DEVICE_CONFIG_CHECK, etc.), not separate Celery tasks. The tasks auto_create_config_check, auto_create_iperf3_check, and auto_create_ping never existed.

> Likely an incorrect or invalid review comment.

Removed failing tests from the test suite.

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/runtests.py (1)

1-398: ⚠️ Potential issue | 🟠 Major

PR title contradicts commit message and actual changes.

The PR title states "[fix] Fixed the test script to execute all tests" but the commit message says "[chores] Removed failing tests". Based on the AI summary, several tests were removed:

  • test_topology_graph
  • test_create_prefix_users
  • test_websocket_marker
  • URL /admin/geo/location/add/ from test_console_errors

Removing failing tests is not the same as fixing the test script to execute all tests—it's the opposite. Deleting tests reduces test coverage and may hide underlying issues in the system.

Please clarify:

  1. Why were these tests failing?
  2. Is the underlying functionality being tested still working correctly?
  3. Should these tests be fixed rather than removed, or is the tested functionality no longer relevant?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/runtests.py` around lines 1 - 398, The PR title claims tests were
"fixed to execute all tests" but the commit removed failing tests
(test_topology_graph, test_create_prefix_users, test_websocket_marker, and the
/admin/geo/location/add/ URL from test_console_errors) which contradicts that
claim; please either (1) restore and fix those tests (locate them by name:
test_topology_graph, test_create_prefix_users, test_websocket_marker and the
reference to "/admin/geo/location/add/" in test_console_errors) so they pass, or
(2) if removal is intentional, update the PR title and commit message to
accurately state tests were removed and add a short justification in the PR
description explaining why each test failed, whether the underlying
functionality is still expected to work, and link any bug/issue tracking the
work to re-enable or rewrite the tests (alternatively mark them as skipped/xfail
with a clear TODO and issue reference).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 3cc13bc6-4726-4523-a2ce-44558b318093

📥 Commits

Reviewing files that changed from the base of the PR and between c758cfc and d69e1f0.

📒 Files selected for processing (1)
  • tests/runtests.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: CI Build
🔇 Additional comments (3)
tests/runtests.py (3)

397-398: LGTM on the test runner simplification.

Using unittest.main(verbosity=2) is the standard and recommended approach for running tests. This is cleaner than manually constructing a TestSuite and properly discovers and executes all test methods in the correct order.


234-247: No issue found. The self.logout() call at the start of test_forgot_password() is correct and necessary. Password reset should only be tested when not logged in. The GitHub Actions CI workflow confirms these tests pass, indicating that SeleniumTestMixin.logout() handles the not-logged-in state gracefully.


274-274: Task name auto_create_check is correct for openwisp-monitoring v1.2.0 and later.

The consolidated approach in openwisp-monitoring v1.2.0 uses a single openwisp_monitoring.check.tasks.auto_create_check Celery task that handles auto-creation for all check types based on enabled auto-create settings (e.g., OPENWISP_MONITORING_AUTO_PING, OPENWISP_MONITORING_AUTO_DEVICE_CONFIG_CHECK, etc.). This confirms the test's expected task list is in sync with the current upstream implementation.

@nemesifier nemesifier changed the title [fix] Fixed the test script to execute all tests [fix] Fixed the test script to execute all tests, removed failing tests Mar 30, 2026
@nemesifier nemesifier changed the title [fix] Fixed the test script to execute all tests, removed failing tests [tests] Fixed runtests.py to execute all tests, removed broken tests Mar 30, 2026
@nemesifier nemesifier added the test This is a testing issue. label Mar 30, 2026
@nemesifier nemesifier merged commit b8e9836 into master Mar 30, 2026
5 checks passed
@nemesifier nemesifier deleted the fix-tests branch March 30, 2026 18:07
@github-project-automation github-project-automation bot moved this from In progress to Done in OpenWISP Contributor's Board Mar 30, 2026

Labels

test This is a testing issue.
