feat: add Statistics & Anti-game chapters to release notes #305
miroslavpojer wants to merge 2 commits into `master`
Conversation
- Introduced a new feature to surface skip-label usage statistics in release notes.
- Added `show-stats-chapters` input to toggle the display of statistics chapters.
- Implemented `StatsChapters` class to collect and render statistics on skipped records.
- Updated action inputs and configuration documentation to reflect the new feature.
- Added tests for the new statistics functionality and updated existing tests accordingly.
- Created shared fixtures in conftest files for better test management.
Walkthrough

This PR introduces a new "Statistics & Anti-game Chapters" feature that tracks and displays skip-label usage statistics aggregated by PR author, issue author/assignee, issue label, and PR label. The feature is controlled via a new `show-stats-chapters` input.
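The aggregation the walkthrough describes can be sketched in a few lines. This is a hedged illustration only: the `Record` dataclass and `aggregate_skipped` helper below are simplified stand-ins, not the PR's actual `PullRequestRecord`/`IssueRecord` API.

```python
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class Record:
    """Hypothetical minimal record; the real PR distinguishes PRs and issues."""
    author: str
    labels: list[str] = field(default_factory=list)


def aggregate_skipped(records: list[Record], skip_labels: set[str]) -> tuple[Counter, Counter]:
    """Count skipped records per author and per non-skip label."""
    by_author: Counter = Counter()
    by_label: Counter = Counter()
    for rec in records:
        if not skip_labels.intersection(rec.labels):
            continue  # only records carrying a skip label are counted
        by_author[rec.author] += 1
        for lbl in rec.labels:
            if lbl not in skip_labels:
                by_label[lbl] += 1
    return by_author, by_label


records = [
    Record("alice", ["skip-release-notes", "bug"]),
    Record("bob", ["feature"]),
    Record("alice", ["skip-release-notes"]),
]
by_author, by_label = aggregate_skipped(records, {"skip-release-notes"})
print(by_author["alice"])  # 2
print(by_label["bug"])     # 1
```

Rendering these counters as Markdown chapter sections is then a pure formatting step over the two `Counter` objects.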
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 2
🧹 Nitpick comments (3)
tests/unit/release_notes_generator/test_action_inputs.py (1)
401-410: Default-path test should assert the "input absent" fallback, not only the "true" value path.

Line 401 currently verifies a truthy return, but it does not prove that a missing input uses the `"true"` default.

✅ Suggested test adjustment

```diff
 def test_get_show_stats_chapters_default_true(mocker):
     """show-stats-chapters defaults to True when the input is absent (default 'true')."""
-    mocker.patch("release_notes_generator.action_inputs.get_action_input", return_value="true")
+    mock_get = mocker.patch(
+        "release_notes_generator.action_inputs.get_action_input",
+        side_effect=lambda _name, default="": default,
+    )
     assert ActionInputs.get_show_stats_chapters() is True
+    mock_get.assert_called_once_with("show-stats-chapters", "true")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/unit/release_notes_generator/test_action_inputs.py` around lines 401-410: update `test_get_show_stats_chapters_default_true` to simulate a missing input rather than returning the string "true". Mock `release_notes_generator.action_inputs.get_action_input` to raise the same exception used when an input is absent (e.g., `KeyError`) or return `None`, depending on the actual `get_action_input` behavior, then call `ActionInputs.get_show_stats_chapters()` and assert it returns `True`. This ensures the test verifies the "input absent" fallback path for `ActionInputs.get_show_stats_chapters` instead of merely the explicit "true" string path.

release_notes_generator/chapters/stats_chapters.py (1)
25-25: Inject skip labels into `StatsChapters` instead of reading `ActionInputs` here.

`StatsChapters` is pure aggregation/rendering logic, but it currently pulls runtime inputs itself. This couples it to env/input boundaries and complicates reuse/testing.

♻️ Proposed refactor

```diff
@@
-from release_notes_generator.action_inputs import ActionInputs
@@
-    def __init__(self, print_empty_chapters: bool = True):
+    def __init__(self, print_empty_chapters: bool = True, skip_labels: set[str] | None = None):
         self.print_empty_chapters = print_empty_chapters
+        self._skip_labels = skip_labels or set()
@@
-        skip_labels = set(ActionInputs.get_skip_release_notes_labels())
@@
-        non_skip_labels = [lbl for lbl in record.labels if lbl not in skip_labels]
+        non_skip_labels = [lbl for lbl in record.labels if lbl not in self._skip_labels]
```

Then pass `skip_labels=set(ActionInputs.get_skip_release_notes_labels())` at construction from `release_notes_generator/builder/builder.py`.

As per coding guidelines, `**/*.py`: Keep pure logic free of environment access where practical, and prefer routing env/I/O through a single boundary module.

Also applies to: 62-77
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `release_notes_generator/chapters/stats_chapters.py` at line 25: `StatsChapters` is pulling runtime inputs directly (`ActionInputs.get_skip_release_notes_labels()`), which couples pure aggregation/rendering logic to environment access. Modify `StatsChapters` so it accepts `skip_labels` (e.g., a parameter on its `__init__` or builder method) and remove any direct calls to `ActionInputs` inside `release_notes_generator/chapters/stats_chapters.py` (including the usages around the current block, ~lines 62-77). Then update `release_notes_generator/builder/builder.py` to call `ActionInputs.get_skip_release_notes_labels()` once and pass `set(...)` into the `StatsChapters` constructor. Ensure all internal references in `StatsChapters` use the provided `skip_labels` instance variable.

tests/unit/release_notes_generator/chapters/test_stats_chapters.py (1)
24-348: Add `MockerFixture` annotations for `mocker` in this test module.

The new tests are solid functionally, but `mocker` params should be typed consistently with the repository test typing rule.

✍️ Minimal pattern to apply

```diff
+from pytest_mock import MockerFixture
@@
-def _make_stats(mocker, *, skip_labels=None, print_empty=True):
+def _make_stats(
+    mocker: MockerFixture, *, skip_labels: list[str] | None = None, print_empty: bool = True
+) -> StatsChapters:
@@
-def test_switch_on_no_skipped(mocker, make_pr):
+def test_switch_on_no_skipped(mocker: MockerFixture, make_pr):
```

As per coding guidelines, `**/{conftest,test_*}.py`: Annotate pytest fixture parameters with `MockerFixture` (from `pytest_mock`).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/release_notes_generator/chapters/test_stats_chapters.py` around lines 24 - 348, Add type annotations for the pytest-mock fixture: import MockerFixture from pytest_mock at top of the module and change every test and helper signature that takes the mocker fixture to accept mocker: MockerFixture (e.g., def _make_stats(mocker: MockerFixture, *, skip_labels=None, print_empty=True): and all tests like def test_switch_on_no_skipped(mocker: MockerFixture, make_pr):). Ensure the import is present and update all occurrences of the unannotated mocker parameter across the tests in this module.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 831c50a3-a544-4286-ad12-e90970570516
📒 Files selected for processing (14)
- .github/copilot-instructions.md
- README.md
- action.yml
- docs/configuration_reference.md
- docs/features/skip_labels.md
- docs/features/stats_chapters.md
- release_notes_generator/action_inputs.py
- release_notes_generator/builder/builder.py
- release_notes_generator/chapters/stats_chapters.py
- release_notes_generator/utils/constants.py
- tests/unit/release_notes_generator/builder/conftest.py
- tests/unit/release_notes_generator/chapters/conftest.py
- tests/unit/release_notes_generator/chapters/test_stats_chapters.py
- tests/unit/release_notes_generator/test_action_inputs.py
```python
@staticmethod
def get_show_stats_chapters() -> bool:
    """
    Get the show stats chapters parameter value from the action inputs.
    """
    return get_action_input(SHOW_STATS_CHAPTERS, "true").lower() == "true"
```
🛠️ Refactor suggestion | 🟠 Major
Invalid show-stats-chapters values are silently accepted as False.
get_show_stats_chapters() coerces any non-"true" value to False, so the validation at lines 599-600 can never fail for malformed inputs (e.g., "treu"), causing silent behavior changes.
💡 Proposed fix (strict bool parsing with validation)
```diff
@@
 @staticmethod
 def get_show_stats_chapters() -> bool:
     """
     Get the show stats chapters parameter value from the action inputs.
     """
-    return get_action_input(SHOW_STATS_CHAPTERS, "true").lower() == "true"
+    raw = (get_action_input(SHOW_STATS_CHAPTERS, "true") or "").strip().lower()
+    return raw == "true"
@@
-        show_stats_chapters = ActionInputs.get_show_stats_chapters()
-        ActionInputs.validate_input(show_stats_chapters, bool, "Show stats chapters must be a boolean.", errors)
+        show_stats_chapters_raw = (get_action_input(SHOW_STATS_CHAPTERS, "true") or "").strip().lower()
+        if show_stats_chapters_raw not in ("true", "false"):
+            errors.append("Show stats chapters must be 'true' or 'false'.")
+        show_stats_chapters = show_stats_chapters_raw == "true"
```

As per coding guidelines, `**/*.py`: Centralize parsing and validation in one input layer.
Also applies to: 599-600
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@release_notes_generator/action_inputs.py` around lines 428 - 434,
get_show_stats_chapters currently coerces any non-"true" string to False,
masking malformed inputs like "treu" and preventing downstream validation in
release_notes_generator.action_inputs; update get_show_stats_chapters to perform
strict parsing and validation at the input layer: read the raw value via
get_action_input(SHOW_STATS_CHAPTERS, "true"), validate it is exactly "true" or
"false" (case-insensitive), and return the corresponding bool; if the value is
invalid, raise a ValueError or call the existing input validation helper so the
error surfaces early (centralize parsing/validation here rather than relying on
later checks).
```python
@pytest.fixture(autouse=True)
def _disable_stats_chapters(mocker):
    """Disable stats chapters for all existing builder tests to avoid output changes."""
    mocker.patch(
        "release_notes_generator.builder.builder.ActionInputs.get_show_stats_chapters",
        return_value=False,
    )
```
🛠️ Refactor suggestion | 🟠 Major
Add fixture type annotations for conftest compliance.
Line 20 should type the mocker parameter and return type to match the fixture typing rule.
✍️ Suggested patch

```diff
 import pytest
+from pytest_mock import MockerFixture


 @pytest.fixture(autouse=True)
-def _disable_stats_chapters(mocker):
+def _disable_stats_chapters(mocker: MockerFixture) -> None:
     """Disable stats chapters for all existing builder tests to avoid output changes."""
     mocker.patch(
         "release_notes_generator.builder.builder.ActionInputs.get_show_stats_chapters",
         return_value=False,
     )
```

As per coding guidelines: "**/{conftest,test_*}.py: Annotate pytest fixture parameters with MockerFixture (from pytest_mock) ..."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/release_notes_generator/builder/conftest.py` around lines 19 - 25,
The fixture _disable_stats_chapters needs pytest-mock typing: add an import
"from pytest_mock import MockerFixture" and annotate the fixture signature to
accept mocker: MockerFixture and return None (def
_disable_stats_chapters(mocker: MockerFixture) -> None:), keeping the existing
body that patches ActionInputs.get_show_stats_chapters; update only the
signature and add the import so conftest follows the fixture typing rule.
#305)

- Added integration test specification in SPEC.md to outline the scope and structure of tests.
- Created conftest.py for shared fixtures and helpers for offline integration tests.
- Implemented test_full_pipeline_snapshot.py to validate the complete release notes generation pipeline.
- Added test_bulk_sub_issue_collector.py for live testing of BulkSubIssueCollector with the real GitHub API.
- Created fixtures/test_full_pipeline_snapshot.md to store expected output for snapshot tests.
- Added initial structure for live tests in the live directory.
Overview
Introduces skip-label usage statistics as a new optional section in generated release notes.
- New `show-stats-chapters` action input to toggle visibility of statistics chapters
- New `StatsChapters` class to collect and render per-author and per-label counts of skipped records (PRs and issues), broken down by `PullRequestRecord` and `IssueRecord`
- Updated `ActionInputs`, `action.yml`, and configuration documentation to reflect the new input
- Existing builder tests guarded by an `autouse` fixture in a scoped `conftest.py`, preventing unintended output changes
- Tests for `StatsChapters` (32 tests) with shared `make_pr`/`make_issue` factory fixtures extracted to `tests/unit/release_notes_generator/chapters/conftest.py`

Release Notes
- `show-stats-chapters` input

Related
Closes #184
Summary by CodeRabbit
New Features
Documentation