Group Dependabot updates into weekly per-ecosystem grouped PRs#236
Conversation
Pull request overview
Updates Dependabot configuration to reduce PR noise by batching dependency updates into a weekly, cross-ecosystem grouped workflow (GitHub Actions, NuGet, npm).
Changes:
- Adds a `multi-ecosystem-groups` section and assigns all update ecosystems to a single `all-dependencies` group.
- Switches cadence from daily to weekly (intended via the new group schedule).
- Broadens update matching with `patterns: ["*"]` for each ecosystem.
@copilot apply changes based on the comments in this thread. Can you parse through the feedback, determine if any of it is relevant, and if it is, provide a comprehensive implementation to address the identified issues and any similar issues? Consider automation, such as tests or githooks, to help prevent this category of issue. Please add additional diagnostics to help understand the issue(s). Consider simplifying if there are too many overlapping concerns, too much complexity, or too much fragility. When changing files, make sure we abide by repo guidelines and rules. If there is any new knowledge learned, be sure to update llm info appropriately. The goal is to understand why the issues are happening, understand the root cause(s), apply relevant fixes, and create infrastructure such that the entire category of issues cannot be made again. I'm only interested in general, complete fixes. I want to avoid fragility and make these checks extremely robust and reliable. If the feedback is relevant, when addressing it, consider the code base as a whole. Are there other files or code areas that this could apply to? If so, do a sweep of those to see if they exhibit similar issues, and if they do, apply similar, appropriate fixes. The idea is to prevent the entire concept and class of issue from happening in the future. Feel free to do web searches to understand modern techniques, approaches, and best practices for this and any similar issues. Please use sub-agents to accomplish these goals. Once a sub-agent is done, have another sub-agent review its work in an adversarial fashion.
If there are any recommendations, have another sub-agent consider them and implement them. Repeat this process in a loop until all sub-agents achieve consensus that the result is incredibly high quality (110/100, exceptional work, absolutely zero issues, minor or otherwise) and does not require any revisions. I want 110% here, give it all you've got!
- Fix `.github/dependabot.yml`: remove invalid `multi-ecosystem-groups` top-level key, remove invalid `multi-ecosystem-group` per-entry keys, remove misplaced `patterns:` at entry level, add missing `schedule:` to each entry, and add correct `groups:` blocks with patterns inside each entry per the Dependabot v2 spec
- Add `scripts/lint-dependabot.ps1`: validates `dependabot.yml` against the Dependabot v2 schema with error codes DEP001-DEP006 covering version check, invalid top-level keys, invalid per-entry keys, misplaced patterns, missing schedule, and groups missing patterns
- Add `scripts/tests/test-lint-dependabot.ps1`: 9 tests covering all error codes, valid configurations, and regression against the exact broken config that was previously shipped
- Add `.github/workflows/dependabot-lint.yml`: CI workflow that runs `lint:dependabot` on pull_request, push to main, and workflow_dispatch
- Wire `lint:dependabot` into the `validate:content` npm script
- Wire `test:lint-dependabot` into the `validate:tests` npm script
- Wire the Dependabot schema check into `.githooks/pre-commit` (section 8b)
- Add Unity .meta files for all new files

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: wallstop <1045249+wallstop@users.noreply.github.com>
Fixes CodeQL alert for missing GITHUB_TOKEN permissions scope. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> Co-authored-by: wallstop <1045249+wallstop@users.noreply.github.com>
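This class of CodeQL alert is resolved by declaring a least-privilege `permissions` block so the workflow's `GITHUB_TOKEN` is explicitly scoped. A hedged sketch of the shape, assuming the triggers and `contents: read` scope described elsewhere in this PR; step names, the checkout action version, and the job layout are illustrative assumptions, not the exact shipped workflow:

```yaml
name: dependabot-lint
on:
  pull_request:
  push:
    branches: [main]
  workflow_dispatch:

# Workflow-level least-privilege scope for GITHUB_TOKEN; jobs inherit it
permissions:
  contents: read

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Dependabot schema lint
        shell: pwsh
        run: ./scripts/lint-dependabot.ps1
```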
… lint Agent-Logs-Url: https://github.com/wallstop/unity-helpers/sessions/942e325b-e8c0-439f-90e3-6955e37ed929 Co-authored-by: wallstop <1045249+wallstop@users.noreply.github.com>
All reviewer feedback was valid; here's what was done to address it comprehensively. Root cause: the previous commit used three Dependabot v2 keys that don't exist in the spec (`multi-ecosystem-groups`, `multi-ecosystem-group`, and a misplaced entry-level `patterns:`). The fixes applied are listed in the commit message above.
…orcement, comment fix, Write-Info usage Agent-Logs-Url: https://github.com/wallstop/unity-helpers/sessions/d583012e-79f9-499c-ba1f-ae24458a02d2 Co-authored-by: wallstop <1045249+wallstop@users.noreply.github.com>
All four reviewer comments addressed in the commit above.
On the comma operator (`return , $fileErrors`): without it, PowerShell unrolls the list in the pipeline, so an empty list becomes `$null`; this is now documented in the script. Test count: 9 → 12, all pass.
…ator Agent-Logs-Url: https://github.com/wallstop/unity-helpers/sessions/95c76b9f-26cd-476d-9bcf-bffb62c63381 Co-authored-by: wallstop <1045249+wallstop@users.noreply.github.com>
Both comments addressed in the commit above. Test count: 12 → 13. All 13 pass.
…vocations in try/catch Agent-Logs-Url: https://github.com/wallstop/unity-helpers/sessions/bdbd4fa4-879e-4ac2-90d4-4280652bec04 Co-authored-by: wallstop <1045249+wallstop@users.noreply.github.com>
Both comments addressed in the commit above. All 15 tests pass.
…values; add 3 tests Agent-Logs-Url: https://github.com/wallstop/unity-helpers/sessions/366eca0e-533c-4bf8-96ba-5eaaf523432d Co-authored-by: wallstop <1045249+wallstop@users.noreply.github.com>
Fixed in the commit above. The DEP001 check now tolerates all valid scalar forms of the version value (bare, double-quoted, single-quoted, with a trailing inline comment).
…ests pass Agent-Logs-Url: https://github.com/wallstop/unity-helpers/sessions/20b05273-6319-41c7-bcc7-343c7a2a1805 Co-authored-by: wallstop <1045249+wallstop@users.noreply.github.com>
All three comments addressed in the commit above. Test count: 18 → 20. All pass.
Description
Reduces Dependabot noise by batching dependency updates into weekly, per-ecosystem grouped PRs and adds schema validation infrastructure to prevent invalid Dependabot v2 configuration from being committed.
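Concretely, the batched shape under the Dependabot v2 schema looks like the following sketch (one entry shown; the actual file has three entries and its own `directory` values):

```yaml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"    # required on every entry (DEP005 guards this)
    groups:
      all-dependencies:     # per-ecosystem group; cross-ecosystem grouping is unsupported
        patterns:
          - "*"             # batch every dependency in this ecosystem into one PR
```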
What changed:
- `.github/dependabot.yml` - Fixed to use valid Dependabot v2 syntax: restored the required `schedule: {interval: weekly}` block on all three entries (`github-actions`, `nuget`, `npm`), and replaced the non-existent `multi-ecosystem-groups`/`multi-ecosystem-group` keys and misplaced `patterns:` with the correct `groups: {all-dependencies: {patterns: ["*"]}}` block nested inside each entry. Note: Dependabot v2 does not support cross-ecosystem grouping; updates from different ecosystems are always separate PRs, but updates within each ecosystem are now batched into a single PR.
- `scripts/lint-dependabot.ps1` - New structural validator for `.github/dependabot.yml` that validates all provided paths (not just the first) and detects seven named error codes:
  - `DEP001` - Root-level `version: 2` is missing or does not appear before the `updates:` section (scanning stops only at `updates:`, not at other valid top-level keys like `registries:`); accepts bare `2`, double-quoted `"2"`, single-quoted `'2'`, and optional trailing inline comments; a nested `version: 2` inside another block does not satisfy this check
  - `DEP002` - Invalid `multi-ecosystem-groups:` top-level key
  - `DEP003` - Invalid `multi-ecosystem-group:` inside an entry
  - `DEP004` - `patterns:` at entry level instead of inside `groups:`
  - `DEP005` - An entry missing a `schedule:` block
  - `DEP006` - A `groups:` entry missing `patterns:` (reported at the group item's declaration line, not the parser's current position)
  - `DEP007` - No `updates:` section found in the configuration (a file with only `version: 2` and no `updates:` block is now an explicit error)
- `scripts/tests/test-lint-dependabot.ps1` - 20 tests covering all error codes, multi-path validation, DEP001 position enforcement, DEP001 false-positive prevention for non-`version:` top-level keys, DEP001 root-level enforcement (a nested `version: 2` must not satisfy the check), DEP001 tolerance for all valid YAML scalar forms (`version: 2`, `version: "2"`, `version: '2'`, `version: 2 # comment`), DEP006 line-number accuracy, comment-in-groups regression, missing `updates:` section detection (DEP007), and a regression test against the exact broken config that prompted this fix. All pass. Linter invocations in test helpers are wrapped in try/catch so unexpected exceptions are reported as failed tests and never abort the suite (matching the `test-lint-odin-undo-safety.ps1` pattern).
- `.github/workflows/dependabot-lint.yml` - CI workflow (following the `odin-undo-safety-lint.yml` pattern: `shell: pwsh`, a `paths:` filter, `permissions: contents: read` at workflow level) that runs tests then lint on every relevant change.
- `package.json` - Added `lint:dependabot` and `test:lint-dependabot` scripts; wired into `validate:content` and `validate:tests` respectively.
- `.githooks/pre-commit` - Added section 8b: runs `lint-dependabot.ps1` when `dependabot.yml` is staged, catching schema errors at commit time.
- LLM documentation - Updated `linter-reference.md`, `formatting-and-linting.md`, and `context.md` to document the new command and pre-commit step.

Validator improvements (from review feedback):
- All paths passed to `lint-dependabot.ps1` are now validated (not just the first match); errors are prefixed with the filename when multiple files are checked.
- DEP001 enforces that `version: 2` appears before `updates:`, scanning past any other top-level key (e.g. `registries:`) without false-triggering. Only reaching `updates:` without having seen `version: 2` fails the check.
- The version check uses an anchored regex (`^version\s*:`) so a `version: 2` value nested inside another block (e.g. under `registries:`) cannot satisfy the check.
- DEP001 accepts the bare form (`version: 2`), double-quoted (`version: "2"`), single-quoted (`version: '2'`), and any of the above with a trailing inline comment (e.g. `version: 2 # required by Dependabot v2`).
- DEP007 is reported when the `updates:` section is completely absent from the file.
- DEP006 errors are reported via a `$groupsItemLineNumber` variable set at each named group item's declaration line; all four emission sites report that line instead of the parser's current position, giving actionable error locations.
- Group-item state (`$inGroupsItem`, `$groupsItemHasPatterns`, `$groupsItemLineNumber`) is now unconditionally reset whenever the parser leaves the `groups:` block, not just on the error path, keeping parser state clean regardless of whether the previous group had `patterns:`.
- `#` comment lines inside `groups:` blocks are explicitly skipped so they cannot trigger false DEP006 errors; named group item detection is tightened to `^\s{6}[A-Za-z0-9_.~-]+\s*:`.
- `Write-Info` is used in test helpers to log fixture paths when `-VerboseOutput` is active.
- The `return , $fileErrors` idiom is documented: without it, PowerShell enumerates the `List[string]` in the pipeline and an empty list becomes `$null`, which throws under `Set-StrictMode -Version Latest` when `.Count` is accessed.
- `exit ($totalErrors -gt 0 ? 1 : 0)` is replaced with a PS5.1-compatible `if`/`else`, required because the pre-commit hook invokes the script via `powershell` (Windows PowerShell 5.1) as a fallback when `pwsh` is absent.

Related Issue
Type of Change
Checklist