PLG: Implement selective recording for Plugins_History to prevent unbounded growth #1612
Conversation
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
📝 Walkthrough

Introduce selective plugin-history recording by tracking object ids changed during a processing cycle; history rows are inserted only for objects flagged as changed. Added test DB helpers and a pytest suite validating the new behavior.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Test as "Test Harness"
    participant Processor as "process_plugin_events"
    participant PluginCfg as "Plugin (config)"
    participant DB as "SQLite (Plugins_Objects / Plugins_History)"
    Test->>Processor: call process_plugin_events(db, plugin, events)
    Processor->>DB: query Plugins_Objects (existing objects)
    Processor->>PluginCfg: get_setting_value(...) (REPORT_ON stubbed)
    Processor->>Processor: compare events vs existing, compute watched changes
    alt object changed or new or missing-transition
        Processor->>Processor: add idsHash to changed_this_cycle
    end
    Processor->>DB: upsert/merge Plugins_Objects
    Processor->>DB: insert into Plugins_History only if idsHash in changed_this_cycle
    DB-->>Processor: commit
    Processor-->>Test: return/complete
```
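The flow above can be reduced to a small model. The following is a hypothetical sketch — `process_events`, its dict-based inputs, and the tuple rows are illustrative stand-ins, not the real `process_plugin_events` from `server/plugin.py`, which works against SQLite tables:

```python
# Hypothetical, simplified model of selective history recording; the real
# implementation lives in server/plugin.py.

def process_events(existing, events):
    """existing/events map idsHash -> tuple of watched values."""
    changed_this_cycle = set()

    for ids_hash, watched in events.items():
        if ids_hash not in existing:
            changed_this_cycle.add(ids_hash)      # new object
        elif existing[ids_hash] != watched:
            changed_this_cycle.add(ids_hash)      # watched-changed

    for ids_hash in existing:
        if ids_hash not in events:
            changed_this_cycle.add(ids_hash)      # missing-in-last-scan

    history = []  # rows that would be inserted into Plugins_History
    for ids_hash, watched in events.items():
        existing[ids_hash] = watched              # Plugins_Objects is always upserted
        if ids_hash in changed_this_cycle:        # history only for changed objects
            history.append((ids_hash, watched))
    return history

existing = {"a": ("up",), "b": ("up",)}
events = {"a": ("up",), "b": ("down",), "c": ("up",)}
rows = process_events(existing, events)
# "a" is unchanged and produces no history row; "b" (changed) and "c" (new) do.
```

This is what keeps `Plugins_History` from growing on every cycle: unchanged objects are still merged into `Plugins_Objects` but no longer generate history rows.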
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
server/plugin.py (1)
820-831: ⚠️ Potential issue | 🟠 Major

Record recovery transitions (missing → present) in history.
At Line 820 onward, only `watched-changed` is marked as changed for existing objects. If an object was `missing-in-last-scan` and reappears with unchanged watched values, it becomes `watched-not-changed` and is skipped from `Plugins_History`, even though its state changed.

Proposed fix
```diff
         else:
-            if tmpObjFromEvent.status == "watched-changed":
-                changed_this_cycle.add(tmpObjFromEvent.idsHash)
-
             index = 0
             for plugObj in pluginObjects:
                 # find corresponding object for the event and merge
                 if plugObj.idsHash == tmpObjFromEvent.idsHash:
+                    if (
+                        plugObj.status == "missing-in-last-scan"
+                        or tmpObjFromEvent.status == "watched-changed"
+                    ):
+                        changed_this_cycle.add(tmpObjFromEvent.idsHash)
                     pluginObjects[index] = combine_plugin_objects(
                         plugObj, tmpObjFromEvent
                     )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/plugin.py` around lines 820 - 831, The current loop only treats tmpObjFromEvent.status == "watched-changed" as a change, so objects that were "missing-in-last-scan" and now return as "watched-not-changed" are ignored and not recorded in Plugins_History; update the handling in the loop that iterates pluginObjects (referencing tmpObjFromEvent, pluginObjects, combine_plugin_objects) to detect a recovery transition (e.g., tmpObjFromEvent.previous_status == "missing-in-last-scan" or equivalent flag on tmpObjFromEvent) and record that idsHash in changed_this_cycle (or explicitly append a recovery entry to Plugins_History) even when tmpObjFromEvent.status == "watched-not-changed", then merge via combine_plugin_objects as before so history reflects the missing→present transition.
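The proposed rule can be exercised in isolation. `Obj` and `should_record` below are hypothetical stand-ins for the real plugin object and the inlined check, just to show the truth table of the condition:

```python
from dataclasses import dataclass

# Hypothetical stand-in for the real plugin object in server/plugin.py.
@dataclass
class Obj:
    idsHash: str
    status: str

def should_record(stored: Obj, incoming: Obj) -> bool:
    # Proposed rule: record history when watched values changed, OR when the
    # stored object was missing in the last scan and is now back (recovery).
    return (stored.status == "missing-in-last-scan"
            or incoming.status == "watched-changed")

# Recovery with unchanged watched values is now recorded:
print(should_record(Obj("h1", "missing-in-last-scan"),
                    Obj("h1", "watched-not-changed")))   # True

# A steady, unchanged object still stays out of Plugins_History:
print(should_record(Obj("h2", "exists"),
                    Obj("h2", "watched-not-changed")))   # False
```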
🧹 Nitpick comments (1)
test/server/test_plugin_history_filtering.py (1)
120-195: Add a recovery test for `missing-in-last-scan` → present.

The suite is strong, but it currently skips the “object reappears” path. Please add a case where a seeded `missing-in-last-scan` object is present again in events, and assert that a history row is written for that transition.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/server/test_plugin_history_filtering.py` around lines 120 - 195, Add a new test that seeds an object with status "missing-in-last-scan" (use seed_plugin_object with the same PREFIX and id), monkeypatch plugin.get_setting_value to _no_report_on as in other tests, then create an events list containing make_plugin_event_row for that same id (so it appears again), call process_plugin_events(db, plugin, events), and assert plugin_history_rows(conn, PREFIX) contains exactly one row for that object (check r[2] equals the id and len(rows) == 1); name the test something like test_recovered_from_missing and follow the same setup pattern as the other tests (use plugin_db fixture, conn.cursor(), conn.commit()) so it integrates with the suite.
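A self-contained sketch of that scenario, using an in-memory SQLite database and a toy processor. The schema, column names, and `process` function here are all hypothetical — the real test should go through the repo's `plugin_db` fixture and the `seed_plugin_object`, `make_plugin_event_row`, and `plugin_history_rows` helpers instead:

```python
# Self-contained sketch of the missing -> present recovery scenario.
# Schema and processor are illustrative stand-ins, not the repo's code.
import sqlite3

def process(conn, events):
    """Toy processor: record history for new, changed, or recovering objects."""
    cur = conn.cursor()
    for ids_hash, watched in events:
        row = cur.execute(
            "SELECT Watched_Value1, Status FROM Plugins_Objects WHERE ObjectGUID = ?",
            (ids_hash,),
        ).fetchone()
        changed = (
            row is None                          # new object
            or row[0] != watched                 # watched-changed
            or row[1] == "missing-in-last-scan"  # recovery transition
        )
        cur.execute(
            "INSERT OR REPLACE INTO Plugins_Objects VALUES (?, ?, 'exists')",
            (ids_hash, watched),
        )
        if changed:
            cur.execute("INSERT INTO Plugins_History VALUES (?, ?)",
                        (ids_hash, watched))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Plugins_Objects (ObjectGUID TEXT PRIMARY KEY,
                                  Watched_Value1 TEXT, Status TEXT);
    CREATE TABLE Plugins_History (ObjectGUID TEXT, Watched_Value1 TEXT);
""")
# Seed an object that was missing in the last scan.
conn.execute("INSERT INTO Plugins_Objects VALUES ('obj1', 'v1', 'missing-in-last-scan')")
conn.commit()

# The object reappears with unchanged watched values...
process(conn, [("obj1", "v1")])
rows = conn.execute("SELECT * FROM Plugins_History").fetchall()
# ...and the recovery transition still produces exactly one history row.
```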
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/db_test_helpers.py`:
- Around line 442-445: In the execute method of the cursor wrapper (def
execute(self, sql, params=None)), change the conditional that currently checks
"if params:" to explicitly check "if params is not None" so empty sequences or
mappings are passed through to self._cursor.execute; update the branch that
calls self._cursor.execute(sql, params) when params is not None and keep the
existing fallback to self._cursor.execute(sql) when params is None.
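A sketch of the corrected wrapper (a hypothetical stand-in for the class in `test/db_test_helpers.py`). The point of `is not None` is that a falsy-but-present argument such as `()` or `{}` is still forwarded to `sqlite3`, instead of silently switching to the no-parameters call path:

```python
import sqlite3

# Hypothetical stand-in for the cursor wrapper in test/db_test_helpers.py.
class CursorWrapper:
    def __init__(self, cursor):
        self._cursor = cursor

    def execute(self, sql, params=None):
        # Check "is not None" rather than truthiness so that explicitly
        # supplied empty sequences/mappings are passed through unchanged.
        if params is not None:
            return self._cursor.execute(sql, params)
        return self._cursor.execute(sql)

conn = sqlite3.connect(":memory:")
cur = CursorWrapper(conn.cursor())
cur.execute("CREATE TABLE t (x INTEGER)")            # params=None branch
cur.execute("INSERT INTO t VALUES (?)", (1,))        # normal parameters
cur.execute("INSERT INTO t SELECT x + 1 FROM t", ()) # empty tuple now forwarded
result = cur.execute("SELECT COUNT(*) FROM t").fetchall()
```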
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: e61e1933-a299-4e64-b0b9-e8953327abd6
📒 Files selected for processing (3)
- server/plugin.py
- test/db_test_helpers.py
- test/server/test_plugin_history_filtering.py
Summary by CodeRabbit
- Bug Fixes
- Tests