
Conversation

@CoMPaTech CoMPaTech (Owner) commented Aug 13, 2025

Summary by CodeRabbit

  • New Features
    • Expanded data redaction to cover IPs inside lists of dictionaries, improving privacy.
  • Bug Fixes
    • Enforced UTF-8 encoding for fixture generation to prevent encoding issues.
  • Documentation
    • Added changelog entry for 0.2.10.
  • Tests
    • Introduced pytest hook and lint directives to stabilize CI.
  • Chores
    • Upgraded CI artifact handling and coverage aggregation.
    • Added pylint to test dependencies and introduced pylint hook.
    • Broadened pre-commit file coverage for scripts.
    • Bumped version to 0.2.10.


coderabbitai bot commented Aug 13, 2025

Walkthrough

CI workflow updates artifact handling and coverage aggregation paths. Pre-commit expands file globs, adds pytest/pylint hooks, and aligns mypy/bandit filters. Version bumped to 0.2.10 with changelog entry. Core changes: derived_data now requires a response; data redaction handles lists of dicts with IPs. Scripts add UTF-8 I/O and interface validation. Tests add pylint directives.

Changes

Cohort / File(s) | Summary
CI Workflow
.github/workflows/verify.yml
Switch actions/download-artifact to v5 with pattern coverage-*, merge-multiple: true, and path set to artifacts; coverage aggregation now reads artifacts/.coverage*.
Pre-commit & Tooling
.pre-commit-config.yaml, requirements-test.txt
Broaden Python file globs to include script/; add local pytest and pylint hooks; add pylint==3.3.7 to test requirements.
Versioning & Changelog
pyproject.toml, CHANGELOG.md
Bump version 0.2.9 → 0.2.10; add 0.2.10 changelog entry noting maintenance and new lint/test hooks.
Core Library
airos/airos8.py, airos/data.py
AirOS.derived_data now requires explicit response param (no default); redaction gains support for lists of dicts with "addr" IPs, replacing with 127.0.0.3 or ::1; MAC redaction comment updated.
Scripts
script/generate_ha_fixture.py, script/mashumaro-step-debug.py
Enforce UTF-8 file I/O; rename internal path vars to private style; import Interface and validate deserialization of interfaces.
Tests
tests/conftest.py, tests/test_discovery.py
Add pylint disables for redefined-outer-name and unnecessary-default-type-args; no runtime changes.

Sequence Diagram(s)

sequenceDiagram
  participant Caller
  participant Redactor as Redactor (airos.data)

  Caller->>Redactor: redact(input_data)
  alt MAC string
    Redactor->>Redactor: Replace prefix with 00:11:22:33: + last 5 chars
  else IP string
    Redactor->>Redactor: Replace with 127.0.0.3/::1 as dummy
  else List[Dict] with "addr" IPs
    Redactor->>Redactor: For each item, set addr to 127.0.0.3 or ::1
  else Other types
    Redactor->>Redactor: Apply existing rules / recurse
  end
  Redactor-->>Caller: redacted_data
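For reference, a minimal standalone sketch of the family-aware "addr" redaction summarized above. The helper name _redact_addr_list is illustrative only; the real logic lives in airos/data.py.

import ipaddress
from typing import Any

def _redact_addr_list(items: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Replace each item's "addr" with a dummy IP, preserving the IP family."""
    redacted = []
    for item in items:
        copy = dict(item)
        # IPv4 entries become 127.0.0.3, IPv6 entries become ::1
        ip = ipaddress.ip_address(copy["addr"])
        copy["addr"] = "127.0.0.3" if ip.version == 4 else "::1"
        redacted.append(copy)
    return redacted

# Example: [{"addr": "192.168.1.20"}, {"addr": "fe80::1"}]
#       -> [{"addr": "127.0.0.3"}, {"addr": "::1"}]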

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~15–20 minutes


Poem

A rabbit taps keys with a lint-loving grin,
Artifacts tidy, tests neatly begin.
IPs wear masks, MACs don a guise,
Scripts speak UTF-8, no more surprise.
Version hops forward—0.2.10’s here—
Thump-thump! Clean burrow, crystal-clear. 🐇✨




codecov bot commented Aug 13, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 96.07%. Comparing base (bbeded4) to head (3bcfb57).
⚠️ Report is 5 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main      #62      +/-   ##
==========================================
+ Coverage   96.05%   96.07%   +0.02%     
==========================================
  Files           9        9              
  Lines        1345     1352       +7     
==========================================
+ Hits         1292     1299       +7     
  Misses         53       53              



@CoMPaTech CoMPaTech merged commit 40ea0e2 into main Aug 13, 2025
19 of 21 checks passed
@CoMPaTech CoMPaTech deleted the upd branch August 13, 2025 18:16

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🔭 Outside diff range comments (1)
airos/airos8.py (1)

196-247: Promote derived_data to @staticmethod — update one class-qualified call

derived_data does not use self; make it a @staticmethod. I searched the repo and found one call site that passes a dummy self and must be updated.

Files to change:

  • airos/airos8.py — make derived_data a staticmethod.
  • script/generate_ha_fixture.py — remove the dummy None arg when calling AirOS.derived_data(...).

Suggested diffs:

# airos/airos8.py
-    def derived_data(self, response: dict[str, Any]) -> dict[str, Any]:
+    @staticmethod
+    def derived_data(response: dict[str, Any]) -> dict[str, Any]:
# script/generate_ha_fixture.py
-                derived_data = AirOS.derived_data(None, source_data)  # type: ignore[arg-type]
+                derived_data = AirOS.derived_data(source_data)

Notes:

  • Instance calls (e.g., airos_device.derived_data(...)) and self.derived_data(...) will continue to work after making it a staticmethod.
  • I found these call sites in the repo: script/generate_ha_fixture.py:48 (class-qualified with dummy None), tests/test_airos8.py:110 and 122 (instance calls), airos/airos8.py:270 (self.derived_data call). Update the script call above; tests need no change.
🧹 Nitpick comments (7)
tests/conftest.py (1)

13-14: Local pylint disables for tests are appropriate.

Disabling redefined-outer-name and unnecessary-default-type-args at the test module level is a pragmatic choice for pytest fixtures and complex generics.

If this pattern repeats widely, consider centralizing via per-file directives at the top of each test module or a project-level policy documented in CONTRIBUTING to keep consistency.
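
For example, a per-file directive at the top of a test module could look like the following. This is an illustrative sketch: the module and fixture names are hypothetical, and only the two rule names come from this PR.

# tests/test_example.py — hypothetical test module
# pylint: disable=redefined-outer-name,unnecessary-default-type-args
import pytest

@pytest.fixture
def airos_device():
    """Hypothetical fixture; redefined-outer-name would otherwise fire below."""
    return object()

def test_fixture_is_injected(airos_device):
    # The parameter name deliberately shadows the fixture defined above.
    assert airos_device is not None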

script/generate_ha_fixture.py (2)

45-47: Good UTF-8 enforcement; prefer json.load for streams

Reading via json.load(source) is simpler, faster, and avoids holding the whole file in memory.

Apply this diff:

-                with open(base_fixture_path, encoding="utf-8") as source:
-                    source_data = json.loads(source.read())
+                with open(base_fixture_path, encoding="utf-8") as source:
+                    source_data = json.load(source)

51-53: Write UTF-8 JSON without ASCII escaping

Since you opened the file with encoding="utf-8", set ensure_ascii=False so non-ASCII characters aren’t escaped, making fixtures more human-readable.

Apply this diff:

-                with open(new_fixture_path, "w", encoding="utf-8") as new:
-                    json.dump(new_data.to_dict(), new, indent=2, sort_keys=True)
+                with open(new_fixture_path, "w", encoding="utf-8") as new:
+                    json.dump(new_data.to_dict(), new, indent=2, sort_keys=True, ensure_ascii=False)
airos/data.py (1)

74-89: LGTM; optional: simplify to a comprehension for readability

The list-of-dicts redaction is correct and family-aware. Consider a comprehension to trim boilerplate.

Apply this diff:

-                    redacted_list = []
-                    for item in v:
-                        redacted_item = item.copy()
-                        redacted_item["addr"] = (
-                            "127.0.0.3"
-                            if ipaddress.ip_address(redacted_item["addr"]).version == 4
-                            else "::1"
-                        )
-                        redacted_list.append(redacted_item)
-                    redacted_d[k] = redacted_list  # type: ignore[assignment]
+                    redacted_d[k] = [
+                        {**item,
+                         "addr": (
+                             "127.0.0.3"
+                             if ipaddress.ip_address(item["addr"]).version == 4
+                             else "::1"
+                         )}
+                        for item in v
+                    ]  # type: ignore[assignment]
script/mashumaro-step-debug.py (3)

9-13: Consolidate sys.path handling and prefer precedence with insert(0).

  • You now add the project root to sys.path at module import time, but there is a duplicate path setup in main (Lines 27-32). This is redundant; keep one location.
  • Prefer insert(0) to ensure the local repo is imported over any globally installed airos package.

Apply this within the selected lines:

-if _project_root_dir not in sys.path:
-    sys.path.append(_project_root_dir)
+if _project_root_dir not in sys.path:
+    sys.path.insert(0, _project_root_dir)

Follow-up:

  • Remove the duplicate path logic at Lines 27-32 to avoid confusion and dead code.

34-35: Use json.load and consider catching I/O/JSON errors earlier.

  • Minor simplification: use json.load instead of json.loads(f.read()).
-with open(sys.argv[1], encoding="utf-8") as f:
-    data = json.loads(f.read())
+with open(sys.argv[1], encoding="utf-8") as f:
+    data = json.load(f)
  • Consider moving the file open/parse into the try block and handling FileNotFoundError/JSONDecodeError to keep failures within your unified error reporting.
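
A minimal sketch of that approach, assuming the script's existing _LOGGER and a non-zero exit on failure; the exact messages are illustrative.

import json
import logging
import sys

_LOGGER = logging.getLogger(__name__)

try:
    with open(sys.argv[1], encoding="utf-8") as f:
        data = json.load(f)
except FileNotFoundError:
    _LOGGER.error("Fixture file not found: %s", sys.argv[1])
    sys.exit(1)
except json.JSONDecodeError as exc:
    _LOGGER.error("Fixture is not valid JSON: %s", exc)
    sys.exit(1)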

66-73: Make Interface validation resilient to missing keys and reduce KeyError risk.

Guard against missing 'interfaces' and missing 'ifname' before logging/deserialization:

-interfaces = data["interfaces"]
-for i, interface_data in enumerate(interfaces):
-    _LOGGER.info("    -> Checking Interface object at index %s...", i)
-    _LOGGER.info("         Interface should be %s.", interface_data["ifname"])
-    interface_obj = Interface.from_dict(interface_data)  # noqa: F841
-    _LOGGER.info("         Success! Interface is valid.")
+interfaces = data.get("interfaces")
+if interfaces is None:
+    _LOGGER.info("  -> No 'interfaces' present; skipping Interface checks.")
+else:
+    for i, interface_data in enumerate(interfaces):
+        _LOGGER.info("    -> Checking Interface object at index %s...", i)
+        _LOGGER.info(
+            "         Interface should be %s.",
+            interface_data.get("ifname", "<missing>"),
+        )
+        interface_obj = Interface.from_dict(interface_data)  # noqa: F841
+        _LOGGER.info("         Success! Interface is valid.")
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bbeded4 and 3bcfb57.

📒 Files selected for processing (11)
  • .github/workflows/verify.yml (1 hunks)
  • .pre-commit-config.yaml (2 hunks)
  • CHANGELOG.md (1 hunks)
  • airos/airos8.py (1 hunks)
  • airos/data.py (2 hunks)
  • pyproject.toml (1 hunks)
  • requirements-test.txt (1 hunks)
  • script/generate_ha_fixture.py (1 hunks)
  • script/mashumaro-step-debug.py (3 hunks)
  • tests/conftest.py (1 hunks)
  • tests/test_discovery.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
airos/airos8.py (1)
tests/test_airos8.py (5)
  • test_derived_data_no_interfaces_key (105-109)
  • test_status_missing_interface_key_data (86-101)
  • test_status_missing_required_key_in_json (231-260)
  • test_derived_data_no_br0_eth0_ath0 (113-123)
  • test_status_invalid_json_response (70-82)
🔇 Additional comments (8)
pyproject.toml (1)

7-7: Version bump verified — pyproject.toml and CHANGELOG.md match

pyproject.toml contains version = "0.2.10" (line 7); CHANGELOG.md has "## [0.2.10] - 2025-08-13" (line 5); only one version field present.

Files to note:

  • pyproject.toml — line 7: version = "0.2.10"
  • CHANGELOG.md — line 5: ## [0.2.10] - 2025-08-13

Please push the git tag (e.g., v0.2.10) and publish release artifacts.

requirements-test.txt (1)

12-12: Pylint pin looks OK — please confirm CI runs Python 3.13

pylint==3.3.7 adds official support for Python 3.13 (it must be executed on Python >=3.9).

  • Files to note:
    • requirements-test.txt:12 — pylint==3.3.7
    • pyproject.toml:22 — requires-python = ">=3.13"

Action items:

  • Confirm your CI runners execute Pylint with Python >=3.9 (preferably 3.13 to match pyproject).
  • Ensure Astroid is upgraded to a version compatible with Pylint 3.3.7 if you encounter parsing issues.
CHANGELOG.md (1)

5-11: Changelog entry for 0.2.10 reads well.

Entry matches the version bump and succinctly captures the maintenance scope.

tests/test_discovery.py (1)

18-19: Per-file pylint suppression is fine here.

redefined-outer-name is common with pytest fixtures; keeping it local avoids globally muting the rule.

.github/workflows/verify.yml (1)

148-152: Artifact download: v5 + pattern + merge look correct

Using actions/download-artifact@v5 with pattern: coverage-* and merge-multiple: true, targeting ${{ github.workspace }}/artifacts is correct and matches the upload-naming convention.

.pre-commit-config.yaml (2)

52-52: Broadened globs to include script/ across bandit, typing-update, and mypy

Consistent coverage of the script directory keeps tooling aligned with the new hooks.

Also applies to: 78-78, 104-105


85-97: run-in-env.sh executable — pytest & pylint pre-commit hooks validated

Verified: script/run-in-env.sh is present and executable, so the pytest and pylint hooks in .pre-commit-config.yaml (lines 85–97) can invoke the shim locally.

  • .pre-commit-config.yaml (lines 85–97): pytest & pylint hooks using script/run-in-env.sh
  • script/run-in-env.sh: verified executable

No further changes required.

script/mashumaro-step-debug.py (1)

15-15: Interface import addition looks correct.

Importing Interface aligns with the new validation logic later in the script.

run: |
  . venv/bin/activate
  coverage combine coverage*/.coverage*
  coverage combine artifacts/.coverage*

💡 Verification agent

🧩 Analysis chain

Double-check coverage discovery after layout change

Combining via coverage combine artifacts/.coverage* should work with merged artifacts. Verify at least one .coverage file lands there.

Run this to confirm:


🏁 Script executed:

#!/bin/bash
set -euo pipefail
echo "Downloaded artifacts:"
ls -la artifacts || true
echo "Coverage files discovered:"
ls -la artifacts/.coverage* || { echo "No coverage files found under artifacts/"; exit 1; }

Length of output: 418


Coverage combine will fail — artifacts/.coverage* not found

I ran the verification script: the artifacts/ directory does not exist and no .coverage files were found, so the step will fail at runtime.

  • Location: .github/workflows/verify.yml — line 156: coverage combine artifacts/.coverage*
  • Evidence: script output showed "ls: cannot access 'artifacts': No such file or directory" and "No coverage files found under artifacts/"
  • Action items:
    • Ensure the job that produces coverage uploads the files (actions/upload-artifact) and this workflow downloads them (actions/download-artifact) into artifacts/.
    • Or update the glob/path to the actual location where coverage files are produced.
    • As a safety measure, guard the combine step or fail with a clear message if no coverage files are present.
🤖 Prompt for AI Agents
.github/workflows/verify.yml around line 156: the step runs "coverage combine
artifacts/.coverage*" but artifacts/ doesn't exist and no .coverage files are
present; update the workflow to either (A) download uploaded coverage artifacts
into artifacts/ using actions/download-artifact (ensure the producing job
uploads them with actions/upload-artifact), (B) change the glob to the actual
path where coverage files are produced, or (C) add a guard that checks for files
before running coverage combine and prints a clear error and exits non-zero if
none found; implement one of these fixes so the step never fails due to a
missing directory or missing coverage files.

Comment on lines +64 to 66
# Redact only the last part of a MAC address to a dummy value
redacted_d[k] = "00:11:22:33:" + v.replace("-", ":").upper()[-5:]
elif isinstance(v, str) and is_ip_address(v):

🛠️ Refactor suggestion

Preserve IP family when redacting (IPv6 should redact to ::1)

Single-string IPs and lists of IP strings currently redact to 127.0.0.3 even for IPv6. Align with the new list-of-dicts behavior by preserving IPv4 vs IPv6 families.

Apply this diff:

-                elif isinstance(v, str) and is_ip_address(v):
-                    # Redact to a dummy local IP address
-                    redacted_d[k] = "127.0.0.3"
+                elif isinstance(v, str) and is_ip_address(v):
+                    # Redact to a dummy local IP address, preserving IP family
+                    redacted_d[k] = (
+                        "127.0.0.3"
+                        if ipaddress.ip_address(v).version == 4
+                        else "::1"
+                    )
@@
-                elif isinstance(v, list) and all(
-                    isinstance(i, str) and is_ip_address(i) for i in v
-                ):
-                    # Redact list of IPs to a dummy list
-                    redacted_d[k] = ["127.0.0.3"]  # type: ignore[assignment]
+                elif isinstance(v, list) and all(
+                    isinstance(i, str) and is_ip_address(i) for i in v
+                ):
+                    # Redact list of IPs to a dummy list, preserving each item's IP family
+                    redacted_d[k] = [
+                        "127.0.0.3" if ipaddress.ip_address(i).version == 4 else "::1"
+                        for i in v
+                    ]  # type: ignore[assignment]

Also applies to: 69-74

Comment on lines 48 to 50
derived_data = AirOS.derived_data(None, source_data) # type: ignore[arg-type]
new_data = AirOSData.from_dict(derived_data)


🛠️ Refactor suggestion

Stop calling an instance method with a dummy self; make derived_data a staticmethod

This script currently calls AirOS.derived_data(None, source_data), which is a code smell and will trip type checkers/linters. The method doesn’t use self; promote it to a @staticmethod and call it with a single argument.

Apply this diff here:

-                derived_data = AirOS.derived_data(None, source_data)  # type: ignore[arg-type]
+                derived_data = AirOS.derived_data(source_data)

And update the method in airos/airos8.py (outside this file) as:

# In airos/airos8.py
@staticmethod
def derived_data(response: dict[str, Any]) -> dict[str, Any]:
    ...

I can push the coordinated changes across both files if you want.

🤖 Prompt for AI Agents
In script/generate_ha_fixture.py around lines 48-50, stop calling the instance
method with a dummy self; replace AirOS.derived_data(None, source_data) with
AirOS.derived_data(source_data) and remove the type: ignore. Also update
airos/airos8.py so derived_data is declared as a @staticmethod with signature
def derived_data(response: dict[str, Any]) -> dict[str, Any]: (remove self), and
update any other callers to pass only the response argument.

