Implements automated package update workflows for custom packages#196
Implements automated package update workflows for custom packages#196
Conversation
Creates foundation for automated package definition updates with: - GitHub workflows and scripts directory structure - update-package.py for GitHub API integration and hash calculation - test-package.sh for package installation testing - calculate-go-vendor-hash.py for Go module vendor hash calculation All scripts are executable and syntax-validated to support automated package management workflows. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Adds automated weekly VimR package updates with comprehensive testing and PR creation. The workflow runs every Monday at 10 AM UTC and includes: - GitHub release monitoring for qvacua/vimr repository - Package build verification and installation testing on macOS - Home manager configuration rebuild testing - Automated pull request creation with detailed update information This completes Phase 2 of the package update automation, providing a complete update pipeline for the VimR binary package with proper validation before creating update pull requests. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Implements Phases 3 & 4 of the automated package update system with three new workflows: - update-replicated.yml: Multi-job workflow for Replicated CLI with platform-specific vendor hash calculation - update-kots.yml: Similar workflow for KOTS CLI with cross-platform Go module vendor hash handling - update-sbctl.yml: Pre-built binary package workflow with platform-specific hash verification These workflows handle complex package patterns requiring: - Platform-specific vendorHash calculations on macOS and Linux for Go modules - Multi-job workflows with cross-platform testing and validation - Pre-built binary downloads with platform-specific URLs and hash verification - Automated PR creation with detailed change information and validation status 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Daily package health monitoring workflow with comprehensive testing - Manual bulk update orchestration with selective package support - Dependabot configuration for automated workflow maintenance - Proper error handling and detailed reporting for all operations 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Fix package name mapping in test-package.sh (corrected sbctl -> troubleshoot-sbctl mapping) - All five implementation phases completed with automated verification criteria met - Weekly automated updates implemented for vimr, replicated, kots, and sbctl packages - Daily monitoring and health checks configured - Manual bulk update capabilities available - Automated dependency maintenance via dependabot This completes the implementation of comprehensive GitHub workflows for package update automation. The workflows are production-ready with proper testing, error handling, and monitoring in place. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Prevents Python bytecode and cache files generated by package update scripts from being tracked in the repository: - __pycache__/ directories - *.py[cod] compiled Python files - *$py.class class files 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Changes all package update workflows to run every 4 hours instead of weekly: - update-vimr.yml: Every 4 hours at :00 minutes - update-replicated.yml: Every 4 hours at :15 minutes - update-kots.yml: Every 4 hours at :30 minutes - update-sbctl.yml: Every 4 hours at :45 minutes Each workflow has a 15-minute offset to spread out the load and avoid GitHub Actions resource conflicts. This provides much more frequent package updates while maintaining system stability. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Refactors update-package.py to automatically discover packages from pkgs/ directory - Adds package type detection (vimr-binary, go-module-simple, go-module-platform-specific, pre-built-binary, github-source) - Implements automatic GitHub repository extraction from package files - Updates test-package.sh to use dynamic package discovery for error messages - Modifies check-package-status.yml workflow to test all discovered packages - Maintains full backwards compatibility with existing package update functions - Successfully validates with imgpkg v0.46.1 and vimr v0.57.1 updates The system now supports any conformant package in pkgs/, eliminating hardcoded package lists and enabling seamless addition of new packages following standard patterns. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
|
@CodeRabbit full review |
✅ Actions performedFull review triggered. |
|
Warning Rate limit exceeded@crdant has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 15 minutes and 22 seconds before requesting another review. ⌛ How to resolve this issue?After the wait time has elapsed, a review can be triggered using the We recommend that you space out your commits to avoid hitting the rate limit. 🚦 How do rate limits work?CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information. 📒 Files selected for processing (3)
WalkthroughThis PR establishes comprehensive CI/CD automation infrastructure for managing package updates and validation. It introduces Dependabot configuration, Python utility scripts for package discovery and hash calculation, a Bash testing framework, and GitHub Actions workflows for automated package updates with platform-specific builds and validation. Changes
Sequence Diagram(s)sequenceDiagram
actor User
participant CI as GitHub Actions
participant CheckJob as check-update Job
participant HashJobs as Hash Calculation<br/>(Darwin + Linux)
participant UpdateJob as update-and-create-pr Job
participant GH as GitHub API
User->>CI: Trigger workflow (scheduled/manual)
CI->>CheckJob: Run check-update script
CheckJob->>GH: Query latest release
CheckJob-->>CI: Export: updated, versions, tag
alt If update detected
par
CI->>HashJobs: Start calculate-darwin-hash
CI->>HashJobs: Start calculate-linux-hash
and
HashJobs->>HashJobs: Clone repo, run go mod vendor
HashJobs->>HashJobs: Compute nix-hash (SRI format)
HashJobs-->>CI: Export: darwin-hash
HashJobs-->>CI: Export: linux-hash
end
CI->>UpdateJob: Run after hashes ready
UpdateJob->>UpdateJob: Update package file with hashes
UpdateJob->>UpdateJob: Build & test package
UpdateJob->>UpdateJob: Rebuild home-manager config
UpdateJob->>GH: Create PR with metadata
GH-->>User: PR created with versions & hashes
else No update available
CI-->>User: Workflow completes (no-op)
end
Estimated code review effort🎯 4 (Complex) | ⏱️ ~45 minutes The PR introduces significant new automation infrastructure with multiple new Python utility scripts containing dense logic for GitHub API integration, hash computations, and package type handling. Four separate GitHub Actions workflows orchestrate parallel hash calculations and conditional PR creation. The heterogeneity of changes (configuration files, shell scripts, Python modules with different concerns, multiple workflow variants) requires reasoning about each component individually, though some workflow patterns are repetitive. The core utility scripts ( Possibly related PRs
Poem
Pre-merge checks and finishing touches✅ Passed checks (3 passed)
Comment |
There was a problem hiding this comment.
Actionable comments posted: 10
🧹 Nitpick comments (26)
home/modules/ai/config/claude/commands/implement.md (1)
28-35: Add language specifications to all fenced code blocks.Multiple fenced code blocks are missing language identifiers, which violates markdown linting standards (MD040). This impacts readability and syntax highlighting.
Apply language specifications to all code blocks:
- Lines 28–35: Add
```markdownbefore the default message block- Lines 79–86: Add
```markdownbefore the issue format block- Lines 142–146: Add
```bashbefore git-commiter task block- Lines 162–166: Add
```bashbefore agent task block- Lines 190–202: Add
```markdownbefore implementation plan template- Lines 262–266: Add
```bashbefore git-commiter final task block- Lines 276–286: Add
```markdownbefore PR description generation context- Lines 367–381: Add
```markdownbefore plan implementation example flow- Lines 384–401: Add
```markdownbefore issue implementation example flowAlso applies to: 79-86, 142-146, 162-166, 190-202, 262-266, 276-286, 367-381, 384-401
home/modules/ai/config/claude/commands/plan-gh.md (1)
15-27: Add language specifications to all fenced code blocks.Seven fenced code blocks are missing language identifiers (MD040 violations), impacting markdown compliance and syntax highlighting.
Apply language specifications:
- Lines 15–27: Add
```markdownbefore default message block- Lines 79–95: Add
```markdownbefore example findings block- Lines 134–152: Add
```markdownbefore issue breakdown options block- Lines 159–172: Add
```markdownbefore issue outline template- Lines 254–282: Add
```markdownbefore summary issue template- Lines 302–320: Add
```markdownbefore issue creation summary- Lines 437–462: Add
```markdownbefore example interaction flowAlso applies to: 79-95, 134-152, 159-172, 254-282, 302-320, 437-462
home/modules/ai/config/claude/commands/validate.md (1)
47-60: Add language specifications to all fenced code blocks.Seven fenced code blocks lack language identifiers (MD040 violations), affecting markdown compliance and syntax highlighting consistency.
Apply language specifications:
- Lines 47–60: Add
```bashbefore git log commands block- Lines 73–91: Add
```markdownbefore findings template- Lines 139–151: Add
```markdownbefore validation tasks block- Lines 319–327: Add
```markdownbefore passing validation message- Lines 330–342: Add
```markdownbefore partial validation message- Lines 345–355: Add
```markdownbefore failed validation message- Lines 407–422: Add
```markdownbefore example usage flowAlso applies to: 73-91, 139-151, 319-327, 330-342, 345-355, 407-422
home/modules/ai/config/claude/agents/codebase-analyzer.md (1)
53-101: Add language specification to fenced code block.The output format template (lines 53–101) is missing a language identifier. Add
```markdownbefore the opening backticks to comply with markdown linting standards (MD040).home/modules/ai/config/claude/agents/git-commiter.md (1)
128-137: Add language specifications to fenced code blocks.Two code blocks lack language identifiers (MD040 violations). Lines 128–137 (commit commands) should use
```bash, and lines 185–221 (example status output) should use```bashor```plaintext.Also applies to: 185-221
home/modules/ai/config/claude/agents/web-search-researcher.md (1)
67-89: Add language specification to fenced code block.The output format template (lines 67–89) is missing a language identifier. Add
```markdownbefore the opening backticks to comply with markdown linting standards (MD040).home/modules/ai/config/claude/agents/docs-analyzer.md (1)
75-75: Add language identifiers to fenced code blocks.Lines 75, 136, and 155 have code blocks missing language specifications. Add the appropriate language identifier (e.g.,
markdown,yaml) after the opening backticks to comply with Markdown linting standards.Also applies to: 136-136, 155-155
home/modules/ai/config/claude/agents/codebase-locator.md (1)
59-59: Add language identifier to fenced code block.Line 59 has a code block missing a language specification. Add the appropriate identifier (e.g.,
markdown,text) after the opening backticks.home/modules/ai/config/claude/agents/codebase-pattern-finder.md (2)
52-52: Add language identifiers to fenced code blocks.Lines 52 and 158 have code blocks missing language specifications. Add identifiers like
javascript,markdown, ortextafter the opening backticks to comply with linting standards.Also applies to: 158-158
40-40: Minor: Consider adjusting tone in agent guidance.Line 40 uses colloquial phrasing and multiple exclamation marks. While this casual tone may be intentional for approachability, consider whether more formal wording would better suit agent documentation. This is optional refinement.
home/modules/ai/config/claude/agents/docs-locator.md (1)
36-36: Add language identifiers to fenced code blocks.Lines 36 and 71 have code blocks missing language specifications. Add identifiers (e.g.,
markdown,text) after the opening backticks to comply with linting standards.Also applies to: 71-71
home/modules/ai/config/claude/commands/research_codebase.md (2)
8-8: Add language identifier to fenced code block.Line 8 has a code block missing a language specification. Add an identifier (e.g.,
text,markdown) after the opening backticks.
14-14: Minor: Simplify phrasing.Line 14 uses "Steps to follow after receiving" which could be simplified to "Steps to follow upon receiving" or just "Steps for receiving" for clarity. This is a stylistic suggestion.
home/modules/ai/config/claude/commands/plan-tdd.md (2)
15-15: Add language identifiers to fenced code blocks.Lines 15, 75, 127, 152, 330, and 453 have code blocks missing language specifications. Add appropriate identifiers (e.g.,
markdown,text,bash) after opening backticks to comply with Markdown linting standards.Also applies to: 75-75, 127-127, 152-152, 330-330, 453-453
108-108: Minor: Consider alternative wording.Line 108 uses "deeper investigation" which is noted as potentially weak. Consider rephrasing to something like "For detailed investigation" or "For thorough investigation" if desired, though this is stylistic.
home/modules/ai/config/claude/commands/plan.md (2)
15-15: Add language identifiers to fenced code blocks.Lines 15, 68, 122, 145, 276, and 422 have code blocks missing language specifications. Add appropriate identifiers (e.g.,
markdown,text,python) after opening backticks to comply with Markdown linting standards.Also applies to: 68-68, 122-122, 145-145, 276-276, 422-422
100-100: Minor: Consider alternative wording for stylistic improvement.Lines 100 and 346 use phrases flagged as weak ("deeper investigation" and "hard to automate"). Consider substituting with alternatives like "thorough investigation" and "difficult to automate" if you prefer stronger wording, though these are stylistic preferences.
Also applies to: 346-346
.github/scripts/update-package.py (4)
202-224: Consider moving the return statement to an else block.The function logic is correct, but for better code structure, the return at line 220 could be moved to an
elseblock after the error handling. This is a minor style improvement.Optional refactor:
version = version_match.group(1) # Extract build for VimR build_match = re.search(r'build\s*=\s*"([^"]+)"', content) build = build_match.group(1) if build_match else None - - return {"version": version, "build": build} except Exception as e: print(f"Failed to read {package_path}: {e}", file=sys.stderr) return None + else: + return {"version": version, "build": build}
300-375: LGTM! GitHub source package update logic is correct.The function properly handles generic GitHub source packages and appropriately notes that vendorHash updates are handled separately.
Minor: Line 361 has an unnecessary
fprefix on a string without placeholders.Optional cleanup for line 361:
- f.write(f"updated=true\n") + f.write("updated=true\n")
388-493: LGTM! sbctl update logic correctly handles multi-platform binaries.The function properly:
- Calculates hashes for both Darwin and Linux binaries
- Updates platform-specific hashes in the package file using appropriate regex patterns
- Exports both hashes to GitHub Actions outputs
Minor: Line 477 has an unnecessary
fprefix.Optional cleanup for line 477:
- f.write(f"updated=true\n") + f.write("updated=true\n")
495-543: LGTM! Main function orchestrates the update flow correctly.The entry point properly:
- Validates arguments and discovers available packages
- Checks API rate limits before starting
- Dispatches to the appropriate package-specific updater
- Handles the case where no update is needed
Minor: Line 534 has an unnecessary
fprefix.Optional cleanup for line 534:
- print(f"Supported types: vimr-binary, pre-built-binary, go-module-simple, go-module-platform-specific, github-source") + print("Supported types: vimr-binary, pre-built-binary, go-module-simple, go-module-platform-specific, github-source").github/workflows/update-vimr.yml (1)
39-41: Consider making the username filter configurable.The hardcoded
grep "crdant"reduces reusability. Consider using a workflow input or environment variable to make this more flexible for other users or configurations.For example:
env: HOME_CONFIG_FILTER: "crdant"Then use:
grep "$HOME_CONFIG_FILTER".github/workflows/update-sbctl.yml (1)
78-80: Consider making the username filter configurable.The hardcoded
grep "crdant"reduces reusability across different users or forks of this repository..github/scripts/calculate-go-vendor-hash.py (1)
14-66: Add input validation for version parameter.The
versionparameter is used directly in a git command without validation. While GitHub Actions workflows control the input in this case, adding validation would make the script more robust against accidental or malicious input.Add validation before using the version:
def calculate_vendor_hash(owner: str, repo: str, version: str, platform: str) -> str: """Calculate vendorHash for Go module on specific platform.""" # Validate version format (should be semver-like) import re if not re.match(r'^[\d]+\.[\d]+\.[\d]+(-[\w.]+)?$', version): raise ValueError(f"Invalid version format: {version}") print(f"Calculating vendor hash for {owner}/{repo} v{version} on {platform}") # ... rest of function.github/scripts/test-package.sh (1)
36-66: Extract repeated Nix expression to reduce duplication.The Nix shell expression
--impure --expr "let pkgs = import <nixpkgs> { overlays = [(import ./overlays { inputs = {}; }).additions]; }; in pkgs.$NIX_PACKAGE"is repeated 7 times. Extract it to a variable for better maintainability.Apply this refactor:
echo "Building package: $NIX_PACKAGE" +# Define reusable Nix expression +NIX_EXPR="--impure --expr \"let pkgs = import <nixpkgs> { overlays = [(import ./overlays { inputs = {}; }).additions]; }; in pkgs.$NIX_PACKAGE\"" + # Build package using overlays (since packages aren't directly exposed in flake) -if ! nix build --impure --expr "let pkgs = import <nixpkgs> { overlays = [(import ./overlays { inputs = {}; }).additions]; }; in pkgs.$NIX_PACKAGE" --no-link; then +if ! nix build $NIX_EXPR --no-link; then echo "❌ Failed to build $NIX_PACKAGE" >&2 exit 1 fi echo "✅ Package build successful" # Test in shell - check if binary is available echo "Testing binary availability: $BINARY_NAME" -if nix shell --impure --expr "let pkgs = import <nixpkgs> { overlays = [(import ./overlays { inputs = {}; }).additions]; }; in pkgs.$NIX_PACKAGE" --command which "$BINARY_NAME" >/dev/null 2>&1; then +if nix shell $NIX_EXPR --command which "$BINARY_NAME" >/dev/null 2>&1; then echo "✅ Binary $BINARY_NAME is available" # Try to get version info if possible echo "Attempting to get version info..." - if nix shell --impure --expr "let pkgs = import <nixpkgs> { overlays = [(import ./overlays { inputs = {}; }).additions]; }; in pkgs.$NIX_PACKAGE" --command "$BINARY_NAME" --version >/dev/null 2>&1; then + if nix shell $NIX_EXPR --command "$BINARY_NAME" --version >/dev/null 2>&1; thenContinue this pattern for the remaining occurrences.
.github/workflows/update-kots.yml (1)
1-151: Consider consolidating with update-replicated.yml to reduce duplication.This workflow is almost identical to update-replicated.yml. Consider creating a reusable workflow that accepts the package name as input to reduce maintenance burden.
Create
.github/workflows/update-go-package.yml:name: Update Go Package on: workflow_call: inputs: package-name: required: true type: string cron-offset: required: true type: stringThen call it from package-specific workflows:
name: Update KOTS CLI on: schedule: - cron: '30 */4 * * *' workflow_dispatch: jobs: update: uses: ./.github/workflows/update-go-package.yml with: package-name: kots cron-offset: '30'
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (32)
.github/dependabot.yml(1 hunks).github/scripts/calculate-go-vendor-hash.py(1 hunks).github/scripts/test-package.sh(1 hunks).github/scripts/update-package.py(1 hunks).github/workflows/check-package-status.yml(1 hunks).github/workflows/update-all-packages.yml(1 hunks).github/workflows/update-kots.yml(1 hunks).github/workflows/update-replicated.yml(1 hunks).github/workflows/update-sbctl.yml(1 hunks).github/workflows/update-vimr.yml(1 hunks).gitignore(1 hunks)home/modules/ai/config/claude/agents/codebase-analyzer.md(1 hunks)home/modules/ai/config/claude/agents/codebase-locator.md(1 hunks)home/modules/ai/config/claude/agents/codebase-pattern-finder.md(1 hunks)home/modules/ai/config/claude/agents/docs-analyzer.md(1 hunks)home/modules/ai/config/claude/agents/docs-locator.md(1 hunks)home/modules/ai/config/claude/agents/git-commiter.md(1 hunks)home/modules/ai/config/claude/agents/pull-request-author.md(1 hunks)home/modules/ai/config/claude/agents/web-search-researcher.md(1 hunks)home/modules/ai/config/claude/commands/commit.md(1 hunks)home/modules/ai/config/claude/commands/do-todo-no-tdd.md(0 hunks)home/modules/ai/config/claude/commands/do-todo.md(0 hunks)home/modules/ai/config/claude/commands/gh-issue.md(0 hunks)home/modules/ai/config/claude/commands/implement.md(1 hunks)home/modules/ai/config/claude/commands/plan-gh.md(1 hunks)home/modules/ai/config/claude/commands/plan-tdd.md(1 hunks)home/modules/ai/config/claude/commands/plan.md(1 hunks)home/modules/ai/config/claude/commands/research_codebase.md(1 hunks)home/modules/ai/config/claude/commands/validate.md(1 hunks)home/modules/ai/default.nix(1 hunks)pkgs/imgpkg/default.nix(1 hunks)pkgs/vimr/default.nix(1 hunks)
💤 Files with no reviewable changes (3)
- home/modules/ai/config/claude/commands/do-todo-no-tdd.md
- home/modules/ai/config/claude/commands/gh-issue.md
- home/modules/ai/config/claude/commands/do-todo.md
🧰 Additional context used
🪛 actionlint (1.7.8)
.github/workflows/update-all-packages.yml
19-19: shellcheck reported issue in this script: SC2086:info:11:10: Double quote to prevent globbing and word splitting
(shellcheck)
19-19: shellcheck reported issue in this script: SC2086:info:19:30: Double quote to prevent globbing and word splitting
(shellcheck)
94-94: shellcheck reported issue in this script: SC2129:style:3:1: Consider using { cmd1; cmd2; } >> file instead of individual redirects
(shellcheck)
94-94: shellcheck reported issue in this script: SC2086:info:3:39: Double quote to prevent globbing and word splitting
(shellcheck)
94-94: shellcheck reported issue in this script: SC2086:info:4:12: Double quote to prevent globbing and word splitting
(shellcheck)
94-94: shellcheck reported issue in this script: SC2086:info:5:57: Double quote to prevent globbing and word splitting
(shellcheck)
94-94: shellcheck reported issue in this script: SC2086:info:6:12: Double quote to prevent globbing and word splitting
(shellcheck)
94-94: shellcheck reported issue in this script: SC2086:info:12:121: Double quote to prevent globbing and word splitting
(shellcheck)
94-94: shellcheck reported issue in this script: SC2086:info:15:137: Double quote to prevent globbing and word splitting
(shellcheck)
94-94: shellcheck reported issue in this script: SC2086:info:18:126: Double quote to prevent globbing and word splitting
(shellcheck)
94-94: shellcheck reported issue in this script: SC2086:info:21:123: Double quote to prevent globbing and word splitting
(shellcheck)
94-94: shellcheck reported issue in this script: SC2129:style:26:1: Consider using { cmd1; cmd2; } >> file instead of individual redirects
(shellcheck)
94-94: shellcheck reported issue in this script: SC2086:info:26:12: Double quote to prevent globbing and word splitting
(shellcheck)
94-94: shellcheck reported issue in this script: SC2086:info:27:26: Double quote to prevent globbing and word splitting
(shellcheck)
94-94: shellcheck reported issue in this script: SC2086:info:28:83: Double quote to prevent globbing and word splitting
(shellcheck)
94-94: shellcheck reported issue in this script: SC2086:info:29:67: Double quote to prevent globbing and word splitting
(shellcheck)
94-94: shellcheck reported issue in this script: SC2086:info:30:46: Double quote to prevent globbing and word splitting
(shellcheck)
94-94: shellcheck reported issue in this script: SC2086:info:31:12: Double quote to prevent globbing and word splitting
(shellcheck)
94-94: shellcheck reported issue in this script: SC2086:info:32:15: Double quote to prevent globbing and word splitting
(shellcheck)
94-94: shellcheck reported issue in this script: SC2086:info:33:56: Double quote to prevent globbing and word splitting
(shellcheck)
.github/workflows/update-sbctl.yml
36-36: property "updated" is not defined in object type {}
(expression)
58-58: property "updated" is not defined in object type {}
(expression)
83-83: the runner of "peter-evans/create-pull-request@v5" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
86-86: property "new-version" is not defined in object type {}
(expression)
87-87: property "new-version" is not defined in object type {}
(expression)
88-88: property "old-version" is not defined in object type {}
(expression)
116-116: property "new-version" is not defined in object type {}
(expression)
.github/workflows/update-vimr.yml
45-45: the runner of "peter-evans/create-pull-request@v5" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
.github/workflows/update-replicated.yml
118-118: the runner of "peter-evans/create-pull-request@v5" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
.github/workflows/update-kots.yml
118-118: the runner of "peter-evans/create-pull-request@v5" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🪛 LanguageTool
home/modules/ai/config/claude/commands/research_codebase.md
[style] ~14-~14: This phrase is redundant. Consider writing “follow”.
Context: ...the user's research query. ## Steps to follow after receiving the research query: 1. **Rea...
(FOLLOW_AFTER)
home/modules/ai/config/claude/agents/pull-request-author.md
[grammar] ~89-~89: Use a hyphen to join words.
Context: ...aracters - Present tense only - No first person references Process: 1. Ident...
(QB_NEW_EN_HYPHEN)
[uncategorized] ~183-~183: Did you mean the formatting language “Markdown” (= proper noun)?
Context: ... Generate ONLY the pull request text in markdown: ```markdown [Title - 40 chars max] T...
(MARKDOWN_NNP)
home/modules/ai/config/claude/commands/plan.md
[style] ~100-~100: Consider a different adjective to strengthen your wording.
Context: ...nt for each type of research: For deeper investigation: - **codebase-locato...
(DEEP_PROFOUND)
[style] ~346-~346: To elevate your writing, try using a synonym here.
Context: ...eal conditions - Edge cases that are hard to automate - User acceptance criter...
(HARD_TO)
home/modules/ai/config/claude/agents/codebase-pattern-finder.md
[grammar] ~40-~40: Use a hyphen to join words.
Context: ...Step 2: Search! - You can use your handy dandy Grep, Glob, and LS tools to ...
(QB_NEW_EN_HYPHEN)
[style] ~40-~40: Using many exclamation marks might seem excessive (in this case: 3 exclamation marks for a text that’s 2155 characters long)
Context: ...u're looking for! You know how it's done! ### Step 3: Read and Extract - Read fi...
(EN_EXCESSIVE_EXCLAMATION)
home/modules/ai/config/claude/commands/plan-gh.md
[style] ~115-~115: Consider a different adjective to strengthen your wording.
Context: ...nt for each type of research: For deeper investigation: - **codebase-locato...
(DEEP_PROFOUND)
home/modules/ai/config/claude/agents/git-commiter.md
[style] ~104-~104: Consider shortening or rephrasing this to strengthen your wording.
Context: ...ntext) - ❌ "WIP" (not descriptive) - ❌ "Made changes to files" (obvious) ## Output Format Whe...
(MAKE_CHANGES)
home/modules/ai/config/claude/commands/plan-tdd.md
[style] ~108-~108: Consider a different adjective to strengthen your wording.
Context: ...nt for each type of research: For deeper investigation: - **codebase-locato...
(DEEP_PROFOUND)
home/modules/ai/config/claude/commands/implement.md
[grammar] ~141-~141: Ensure spelling is correct
Context: ...Initial test commit**: Use the git-commiter agent to create a clean commit: ``...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[grammar] ~161-~161: Ensure spelling is correct
Context: ...gnificant iteration**: Use the git-commiter agent for clean, atomic commits: `...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[grammar] ~261-~261: Ensure spelling is correct
Context: ... If changes remain, use the git-commiter agent: ``` Task: Create final c...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
🪛 markdownlint-cli2 (0.18.1)
home/modules/ai/config/claude/commands/research_codebase.md
8-8: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
home/modules/ai/config/claude/agents/codebase-analyzer.md
53-53: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
home/modules/ai/config/claude/commands/plan.md
15-15: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
68-68: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
122-122: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
145-145: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
276-276: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
422-422: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
home/modules/ai/config/claude/agents/docs-locator.md
36-36: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
71-71: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
home/modules/ai/config/claude/agents/codebase-locator.md
59-59: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
home/modules/ai/config/claude/agents/codebase-pattern-finder.md
52-52: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
158-158: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
home/modules/ai/config/claude/commands/plan-gh.md
15-15: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
79-79: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
134-134: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
159-159: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
254-254: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
302-302: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
437-437: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
home/modules/ai/config/claude/agents/docs-analyzer.md
75-75: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
136-136: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
154-154: Multiple headings with the same content
(MD024, no-duplicate-heading)
155-155: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
home/modules/ai/config/claude/agents/git-commiter.md
138-138: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
194-194: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
home/modules/ai/config/claude/agents/web-search-researcher.md
67-67: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
home/modules/ai/config/claude/commands/validate.md
47-47: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
73-73: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
139-139: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
319-319: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
330-330: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
345-345: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
407-407: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
home/modules/ai/config/claude/commands/plan-tdd.md
15-15: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
75-75: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
127-127: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
152-152: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
330-330: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
453-453: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
home/modules/ai/config/claude/commands/implement.md
28-28: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
79-79: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
142-142: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
162-162: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
190-190: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
262-262: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
276-276: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
367-367: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
384-384: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🪛 Ruff (0.14.0)
.github/scripts/calculate-go-vendor-hash.py
25-25: subprocess call: check for execution of untrusted input
(S603)
25-29: Starting a process with a partial executable path
(S607)
36-36: Avoid specifying long messages outside the exception class
(TRY003)
40-40: Starting a process with a partial executable path
(S607)
44-44: Starting a process with a partial executable path
(S607)
51-53: Starting a process with a partial executable path
(S607)
58-58: subprocess call: check for execution of untrusted input
(S603)
58-61: Starting a process with a partial executable path
(S607)
105-105: Do not catch blind exception: Exception
(BLE001)
.github/scripts/update-package.py
41-41: Probable use of requests call without timeout
(S113)
63-63: Probable use of requests call without timeout
(S113)
88-88: subprocess call: check for execution of untrusted input
(S603)
88-90: Starting a process with a partial executable path
(S607)
93-93: subprocess call: check for execution of untrusted input
(S603)
93-96: Starting a process with a partial executable path
(S607)
107-107: subprocess call: check for execution of untrusted input
(S603)
107-112: Starting a process with a partial executable path
(S607)
117-117: subprocess call: check for execution of untrusted input
(S603)
117-120: Starting a process with a partial executable path
(S607)
154-154: Do not catch blind exception: Exception
(BLE001)
220-220: Consider moving this statement to an else block
(TRY300)
222-222: Do not catch blind exception: Exception
(BLE001)
237-237: Do not catch blind exception: Exception
(BLE001)
264-264: Do not catch blind exception: Exception
(BLE001)
286-286: f-string without any placeholders
Remove extraneous f prefix
(F541)
296-296: Do not catch blind exception: Exception
(BLE001)
320-320: Do not catch blind exception: Exception
(BLE001)
338-338: Do not catch blind exception: Exception
(BLE001)
361-361: f-string without any placeholders
Remove extraneous f prefix
(F541)
373-373: Do not catch blind exception: Exception
(BLE001)
399-399: Do not catch blind exception: Exception
(BLE001)
420-420: subprocess call: check for execution of untrusted input
(S603)
420-422: Starting a process with a partial executable path
(S607)
424-424: subprocess call: check for execution of untrusted input
(S603)
424-427: Starting a process with a partial executable path
(S607)
432-432: subprocess call: check for execution of untrusted input
(S603)
432-434: Starting a process with a partial executable path
(S607)
436-436: subprocess call: check for execution of untrusted input
(S603)
436-439: Starting a process with a partial executable path
(S607)
477-477: f-string without any placeholders
Remove extraneous f prefix
(F541)
491-491: Do not catch blind exception: Exception
(BLE001)
534-534: f-string without any placeholders
Remove extraneous f prefix
(F541)
🔇 Additional comments (26)
home/modules/ai/default.nix (1)
29-31: LGTM – source directory and agent files verified.The changes correctly extend the Claude configuration workaround to include agent files. Verification confirms that
home/modules/ai/config/claude/agents/exists with 8 agent configuration files. The activation script will work as intended.home/modules/ai/config/claude/commands/commit.md (1)
1-38: Documentation is clear and well-structured.This brief guide provides clear, actionable steps for creating commits. The emphasis on safe, explicit file staging and atomic commits aligns well with the broader PR workflow defined in related documentation.
home/modules/ai/config/claude/agents/docs-analyzer.md (1)
1-176: Comprehensive and well-structured agent documentation.The docs-analyzer agent specification is thorough, with clear responsibilities, analysis strategy, output format, and practical guidelines. The example transformations effectively demonstrate the agent's expected behavior. This documentation provides good context for users invoking the agent.
home/modules/ai/config/claude/agents/codebase-locator.md (1)
1-103: Clear and focused agent specification.The codebase-locator documentation effectively defines the agent's role in file discovery and organization. The search strategy and output format are well-detailed with practical examples. Guidelines are explicit about scope boundaries (locations only, no analysis).
home/modules/ai/config/claude/agents/codebase-pattern-finder.md (1)
1-206: Well-detailed pattern-finding agent specification.The documentation effectively outlines how the agent should locate and extract reusable code patterns. The two main examples (offset and cursor-based pagination) clearly illustrate expected output. Guidelines about working code, multiple variations, and file:line references are practical and actionable.
home/modules/ai/config/claude/agents/docs-locator.md (1)
1-134: Comprehensive documentation discovery agent specification.The docs-locator documentation clearly defines the agent's role in finding documents across docs/, adrs/, and sessions/ directories. The directory structure example and output format template are practical. Guidelines effectively communicate the boundary between locating documents versus analyzing their contents.
home/modules/ai/config/claude/commands/research_codebase.md (1)
1-173: Well-structured research orchestration protocol.The research_codebase command documentation provides a clear, numbered workflow for conducting comprehensive codebase research via parallel sub-agents. The emphasis on reading files completely before spawning tasks (step 1) and waiting for all sub-agents to complete (step 4) establishes good discipline. The metadata gathering section with YAML frontmatter and detailed file naming conventions (with examples) is practical. The "Important notes" section effectively captures critical ordering and path-handling requirements.
home/modules/ai/config/claude/commands/plan-tdd.md (1)
1-469: Comprehensive TDD-focused plan workflow.The restructured plan-tdd command documentation now provides a robust, interactive workflow for TDD implementation planning. The emphasis on reading files completely before spawning research tasks is excellent discipline. The detailed template (lines 182–325) with explicit phases (Red, Green, Refactor, Integration), clear success criteria separation (Automated vs. Manual), and practical testing strategy sections align well with TDD principles. The example interaction flow at the end effectively demonstrates the workflow. The "Common TDD Patterns" and "Sub-task Spawning Best Practices" sections add valuable guidance.
home/modules/ai/config/claude/commands/plan.md (1)
1-435: Well-structured general implementation plan workflow.The restructured plan.md provides a comprehensive, interactive workflow for creating implementation plans. The re-emphasized requirement to read all mentioned files FULLY before spawning sub-tasks (Step 1) ensures proper context gathering. The parallel sub-task orchestration with multiple agent types (codebase-locator, codebase-analyzer, docs-locator, codebase-pattern-finder) is well-designed. The template now explicitly splits success criteria into "Automated Verification" and "Manual Verification" sections, which improves clarity for execution. The "Important Guidelines" section (especially item 6: "No Open Questions in Final Plan") establishes good discipline for complete, actionable plans.
pkgs/imgpkg/default.nix (1)
5-11: LGTM! Version bump looks good.The version update from 0.44.2 to 0.46.1 and the hash update are correct. The change from
sha256tohashattribute follows Nix best practices for SRI format hashes..gitignore (1)
14-17: LGTM! Appropriate Python cache exclusions.The Python cache patterns are standard and necessary given the new Python scripts added in this PR.
pkgs/vimr/default.nix (1)
5-10: LGTM! VimR version bump is correct.The version update to v0.57.1 with the corresponding build timestamp and hash update looks correct.
.github/dependabot.yml (1)
1-18: LGTM! Dependabot configuration is well-structured.The configuration appropriately manages GitHub Actions dependencies with reasonable PR limits and labeling. The weekly Tuesday schedule doesn't conflict with the 4-hourly package update workflows.
.github/scripts/update-package.py (7)
17-34: LGTM! GitHub authentication properly handled.The function correctly checks for authentication tokens and provides helpful feedback about rate limiting. Good practice to support both
GITHUB_TOKENandGITHUB_AUTH_TOKEN.
85-102: LGTM! Hash calculation is correct.The function properly uses Nix tools to calculate and convert hashes. The subprocess calls to
nix-prefetch-urlandnix hash convertare appropriate for this Nix-based automation context.
104-126: LGTM! Go source hash calculation is correct.The function correctly uses
nix-prefetch-gitto fetch and hash Git repositories, then converts to SRI format. The JSON parsing and error handling are appropriate.
128-158: LGTM! Package discovery logic is sound.The function correctly discovers packages in the
pkgs/directory and handles errors gracefully. The broad exception handling at line 154 is acceptable here since you want to continue discovering other packages even if one fails to analyze.
160-200: LGTM! Package analysis logic is comprehensive.The function correctly identifies package types and extracts metadata using appropriate regex patterns. The classification logic covers the expected package types mentioned in the PR (VimR, Go modules, pre-built binaries, GitHub sources).
226-298: LGTM! VimR update logic is well-structured.The function correctly:
- Parses VimR's version-build tag format
- Checks if an update is needed before downloading
- Calculates the new hash
- Updates the package file with proper regex substitutions
- Sets GitHub Actions outputs for workflow integration
377-386: LGTM! Legacy wrapper is appropriate.This backwards-compatible wrapper correctly delegates to the newer
update_github_source_packagefunction..github/workflows/update-all-packages.yml (2)
17-39: LGTM - Package determination logic is sound.The shell script correctly builds a JSON array from comma-separated input or defaults to all packages. The logic handles both the "all" case and custom package lists appropriately.
41-83: Well-structured workflow orchestration.The conditional triggering of individual package workflows based on the determined package list is implemented correctly. The use of
contains(fromJSON(...))ensures each package workflow is triggered only when needed..github/workflows/check-package-status.yml (1)
77-95: Solid Home Manager configuration validation.The approach of iterating through all home configurations and building each activation package is thorough and catches configuration issues early.
.github/scripts/calculate-go-vendor-hash.py (1)
68-107: Script logic is sound with appropriate error handling.The main function correctly maps package names to repositories, calculates vendor hashes, and writes GitHub Actions outputs. The broad exception handling is appropriate for a CI script context.
.github/scripts/test-package.sh (1)
68-92: Platform-specific validation is well-designed.The checks for VimR on macOS and cross-platform support for CLI tools provide good validation coverage and helpful warnings when packages are tested on unexpected platforms.
.github/workflows/update-replicated.yml (1)
87-104: Verify sed command compatibility across platforms.The sed command uses
-i.bakwhich works on both GNU and BSD sed. The regex pattern looks correct for updating the vendorHash values in the Nix expression.
….com:crdant/dotfiles into feature/crdant/adds-package-update-workflows
This commit enhances the reliability and maintainability of GitHub Actions workflows and Python scripts through several improvements: **Workflow Improvements:** - Add request timeouts to prevent hanging on network issues in update-package.py - Split matrix jobs into separate darwin/linux jobs in update-kots.yml and update-replicated.yml to fix output aggregation issues - Add comprehensive error handling for package list extraction in check-package-status.yml to fail fast on errors - Update peter-evans/create-pull-request action from v5 to v7 across all workflows for improved features and security **Script Enhancements:** - Add version parameter validation in calculate-go-vendor-hash.py to prevent command injection - Refactor test-package.sh to extract repeated Nix expression into reusable variable - Apply code review improvements to update-package.py (better error handling structure, remove unnecessary f-string prefixes) All changes improve error handling, reduce code duplication, and enhance security without affecting functionality. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
There was a problem hiding this comment.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
home/modules/editor/default.nix (1)
1-157: Major inconsistency: PR objectives don't match file changes.The PR objectives describe adding GitHub Actions workflows for automated package updates (VimR, Replicated, KOTS, sbctl), but this file contains Neovim editor plugin additions (diffview-nvim and gitsigns-nvim). These are completely unrelated changes.
This suggests either:
- These changes were accidentally included in this PR
- The PR description is incorrect
- Multiple unrelated changes were bundled together
Please verify whether these editor configuration changes should be in a separate PR or if the PR objectives need updating.
♻️ Duplicate comments (2)
.github/scripts/update-package.py (1)
40-76: Timeouts added to requests calls — good.The timeouts on rate limit and release fetches address prior hang risk.
.github/workflows/update-sbctl.yml (1)
8-33: Critical: Missing job outputs definition (still unresolved).The
update-sbctljob lacks theoutputssection required by downstream jobs that referenceneeds.update-sbctl.outputs.updated,needs.update-sbctl.outputs.new-version,needs.update-sbctl.outputs.old-version,needs.update-sbctl.outputs.darwin-hash, andneeds.update-sbctl.outputs.linux-hash. Without this section, thetest-macosandcreate-pull-requestjobs will fail or skip incorrectly.Apply this diff to add the required outputs:
update-sbctl: runs-on: ubuntu-latest permissions: contents: write pull-requests: write + outputs: + updated: ${{ steps.check.outputs.updated }} + old-version: ${{ steps.check.outputs.old-version }} + new-version: ${{ steps.check.outputs.new-version }} + darwin-hash: ${{ steps.check.outputs.darwin-hash }} + linux-hash: ${{ steps.check.outputs.linux-hash }} steps:
🧹 Nitpick comments (11)
pkgs/helm-beta/default.nix (2)
16-16: Document why tests are disabled.Tests are disabled without explanation. For a beta version package, running tests would increase confidence in the build. Please either:
- Enable tests if possible, or
- Add a comment explaining why they must be disabled (e.g., "# Tests require network access" or "# Tests fail in sandboxed builds")
29-34: Consider adding error handling for completion generation.The shell completion commands run without error checking. If the helm binary doesn't support a completion format or fails, the build would continue silently. For a beta version, consider adding basic error handling:
postInstall = '' - $out/bin/helm completion bash > helm.bash - $out/bin/helm completion zsh > helm.zsh - $out/bin/helm completion fish > helm.fish + $out/bin/helm completion bash > helm.bash || echo "Warning: bash completion failed" + $out/bin/helm completion zsh > helm.zsh || echo "Warning: zsh completion failed" + $out/bin/helm completion fish > helm.fish || echo "Warning: fish completion failed" installShellCompletion helm.{bash,zsh,fish} '';.github/workflows/check-package-status.yml (3)
36-46: Prefer a machine‑readable list to fragile grep/cut chains.Add a “--list-json” mode to update-package.py that prints JSON array, then parse with jq. This avoids breakage if text changes.
Example follow‑up once the flag exists:
- available_packages=$(echo "$script_output" | grep "Available packages:" | cut -d: -f2 | sed 's/^[ ]*//g' | tr ',' '\n' | sed 's/^[ ]*//g' | sed 's/[ ]*$//g') + available_packages=$(python3 .github/scripts/update-package.py --list-json | jq -r '.[]')
90-92: Env var mismatch reduces API limits.update-package.py reads GITHUB_TOKEN/GITHUB_AUTH_TOKEN, but the job exports GH_TOKEN. Export GITHUB_TOKEN so authenticated requests are used.
- env: - GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} + env: + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
105-141: Add strict bash options for early failure and safer loops.Enable pipefail and nounset at the top of the run block to catch errors earlier.
- echo "🏠 Testing home manager configuration builds..." + set -euo pipefail + echo "🏠 Testing home manager configuration builds...".github/scripts/update-package.py (3)
460-471: Regex for platform hashes assumes single‑line formatting.Make the sbctl substitutions robust across newlines/whitespace, or match explicitly with DOTALL.
- content = re.sub( - r'(sha256\s*=\s*if\s+isDarwin\s+then\s*)"[^"]+"\s*else', - f'\\1"{darwin_hash}"\n else', - content - ) + content = re.sub( + r'(sha256\s*=\s*if\s+isDarwin\s+then\s*)"[^\"]+"(\s*else)', + lambda m: f'{m.group(1)}"{darwin_hash}"{m.group(2)}', + content, + flags=re.S, + )And similarly for the Linux branch:
- content = re.sub( - r'(else\s*)"[^"]+";', - f'\\1"{linux_hash}";', - content - ) + content = re.sub(r'(else\s*)"[^\"]+";', lambda m: f'{m.group(1)}"{linux_hash}";', content, flags=re.S)
287-293: Tiny nit: remove unnecessary f‑string.f"updated=true" has no placeholders.
- f.write(f"updated=true\n") + f.write("updated=true\n")
158-160: Narrow overly broad exception catches.Catching Exception masks root causes. Prefer specific exceptions (OSError, JSONDecodeError, subprocess.CalledProcessError, requests.RequestException) per block.
Also applies to: 224-226, 241-244, 268-270, 300-302, 324-327, 342-344, 377-379, 403-405, 495-497
.github/workflows/update-vimr.yml (3)
18-22: Pin third‑party actions to a tag or commit SHA.Using @main is risky. Pin to a stable tag or SHA for reproducible builds.
- uses: DeterminateSystems/nix-installer-action@main + uses: DeterminateSystems/nix-installer-action@v13 - uses: DeterminateSystems/magic-nix-cache-action@main + uses: DeterminateSystems/magic-nix-cache-action@v4(Update to the latest known stable tags you use internally.)
24-29: Export GITHUB_TOKEN for the updater script.Ensure authenticated API calls to GitHub to avoid rate limiting.
- name: Check for VimR updates id: check + env: + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} run: | python3 .github/scripts/update-package.py vimr
35-41: Guard against missing home configuration.If grep finds no config, xargs may invoke nix with an empty name. Capture the config and check before building.
- nix eval --impure --json .#homeConfigurations --apply 'builtins.attrNames' | \ - jq -r '.[]' | grep "crdant" | head -1 | \ - xargs -I {} nix build ".#homeConfigurations.{}.activationPackage" --no-link + cfg=$(nix eval --impure --json .#homeConfigurations --apply 'builtins.attrNames' | jq -r '.[]' | grep -m1 '^crdant' || true) + if [ -z "${cfg:-}" ]; then + echo "ℹ️ No matching home configuration found; skipping rebuild." + else + nix build ".#homeConfigurations.${cfg}.activationPackage" --no-link + fi
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
flake.lockis excluded by!**/*.lock
📒 Files selected for processing (20)
.github/scripts/calculate-go-vendor-hash.py(1 hunks).github/scripts/test-package.sh(1 hunks).github/scripts/update-package.py(1 hunks).github/workflows/check-package-status.yml(1 hunks).github/workflows/update-kots.yml(1 hunks).github/workflows/update-replicated.yml(1 hunks).github/workflows/update-sbctl.yml(1 hunks).github/workflows/update-vimr.yml(1 hunks)home/modules/ai/default.nix(1 hunks)home/modules/editor/default.nix(3 hunks)home/modules/kubernetes/default.nix(1 hunks)home/modules/replicated/default.nix(1 hunks)overlays/default.nix(1 hunks)pkgs/default.nix(1 hunks)pkgs/helm-beta/default.nix(1 hunks)pkgs/llm/plugins/generated-plugins.nix(1 hunks)pkgs/llm/plugins/llm-plugins-lock.json(1 hunks)pkgs/llm/plugins/llm-plugins.json(1 hunks)pkgs/replicated/default.nix(1 hunks)pkgs/vimr/default.nix(1 hunks)
✅ Files skipped from review due to trivial changes (1)
- pkgs/llm/plugins/llm-plugins.json
🚧 Files skipped from review as they are similar to previous changes (3)
- .github/workflows/update-replicated.yml
- home/modules/ai/default.nix
- pkgs/vimr/default.nix
🧰 Additional context used
🧬 Code graph analysis (2)
.github/scripts/update-package.py (1)
.github/scripts/calculate-go-vendor-hash.py (1)
main(73-112)
.github/scripts/calculate-go-vendor-hash.py (1)
.github/scripts/update-package.py (1)
main(499-546)
🪛 actionlint (1.7.8)
.github/workflows/update-sbctl.yml
36-36: property "updated" is not defined in object type {}
(expression)
58-58: property "updated" is not defined in object type {}
(expression)
86-86: property "new-version" is not defined in object type {}
(expression)
87-87: property "new-version" is not defined in object type {}
(expression)
88-88: property "old-version" is not defined in object type {}
(expression)
116-116: property "new-version" is not defined in object type {}
(expression)
🪛 Ruff (0.14.0)
.github/scripts/update-package.py
92-92: subprocess call: check for execution of untrusted input
(S603)
92-94: Starting a process with a partial executable path
(S607)
97-97: subprocess call: check for execution of untrusted input
(S603)
97-100: Starting a process with a partial executable path
(S607)
111-111: subprocess call: check for execution of untrusted input
(S603)
111-116: Starting a process with a partial executable path
(S607)
121-121: subprocess call: check for execution of untrusted input
(S603)
121-124: Starting a process with a partial executable path
(S607)
158-158: Do not catch blind exception: Exception
(BLE001)
224-224: Do not catch blind exception: Exception
(BLE001)
241-241: Do not catch blind exception: Exception
(BLE001)
268-268: Do not catch blind exception: Exception
(BLE001)
290-290: f-string without any placeholders
Remove extraneous f prefix
(F541)
300-300: Do not catch blind exception: Exception
(BLE001)
324-324: Do not catch blind exception: Exception
(BLE001)
342-342: Do not catch blind exception: Exception
(BLE001)
377-377: Do not catch blind exception: Exception
(BLE001)
403-403: Do not catch blind exception: Exception
(BLE001)
424-424: subprocess call: check for execution of untrusted input
(S603)
424-426: Starting a process with a partial executable path
(S607)
428-428: subprocess call: check for execution of untrusted input
(S603)
428-431: Starting a process with a partial executable path
(S607)
436-436: subprocess call: check for execution of untrusted input
(S603)
436-438: Starting a process with a partial executable path
(S607)
440-440: subprocess call: check for execution of untrusted input
(S603)
440-443: Starting a process with a partial executable path
(S607)
495-495: Do not catch blind exception: Exception
(BLE001)
.github/scripts/calculate-go-vendor-hash.py
20-20: Avoid specifying long messages outside the exception class
(TRY003)
30-30: subprocess call: check for execution of untrusted input
(S603)
30-34: Starting a process with a partial executable path
(S607)
41-41: Avoid specifying long messages outside the exception class
(TRY003)
45-45: Starting a process with a partial executable path
(S607)
49-49: Starting a process with a partial executable path
(S607)
56-58: Starting a process with a partial executable path
(S607)
63-63: subprocess call: check for execution of untrusted input
(S603)
63-66: Starting a process with a partial executable path
(S607)
110-110: Do not catch blind exception: Exception
(BLE001)
🔇 Additional comments (25)
home/modules/replicated/default.nix (2)
12-12: Clarify if this change belongs in this PR.The addition of
unstable.oktetois not mentioned in the PR summary or objectives, which focus on automated package update workflows for VimR, Replicated, KOTS, and sbctl. Please confirm whether this change should be included in this PR or if it was accidentally committed.
13-17: Good practice: Platform-specific package placement.Moving
iterm2to the Darwin-specific packages list is correct, as iTerm2 is a macOS-only terminal emulator. This ensures it's only installed on compatible systems and prevents potential installation issues on Linux.home/modules/editor/default.nix (2)
53-55: Gitsigns setup looks good.The gitsigns setup follows the standard Neovim plugin pattern and will work with default configuration. You may want to customize signs, keybindings, or blame options later, but the defaults are sensible for initial usage.
21-21: Both plugins are available in nixpkgs.Both
diffview-nvimandgitsigns-nvimare available in nixpkgs as vimPlugins and can be referenced aspkgs.vimPlugins.diffview-nvimandpkgs.vimPlugins.gitsigns-nvim. The code in the PR uses the correct syntax.pkgs/llm/plugins/generated-plugins.nix (1)
114-125: AI summary error: change is to llm-cmd-comp, not llm-cmd.The AI summary incorrectly states the change was made to
llm-cmd, but the actual modification is tollm-cmd-compat line 124. The packagellm-cmd(line 111) still correctly includes both"prompt_toolkit"and"pygments".The change itself is consistent with the corresponding lockfile update, where
pygmentswas also removed fromllm-cmd-comp.pkgs/llm/plugins/llm-plugins-lock.json (1)
115-128: Confirm pygments removal intent and clarify PR scope.The change to remove
pygmentsfromllm-cmd-compappears intentional (part of commit #202 "Updates dependencies and tooling"), andllm-cmd-compis actively used in this repository—completion scripts are fetched from the GitHub repo inoverlays/llm/default.nix(lines 127–140).However:
Cannot verify removal safety: The lockfile is metadata only. Without access to the git diff or the external
llm-cmd-compsource code, it's unclear whetherpygmentsis actually unused or if this removal could break functionality.PR objective mismatch persists: The PR title and description reportedly focus on package update workflows for VimR, Replicated, KOTS, and sbctl—none related to LLM plugins. Confirm whether:
- These LLM plugin changes are intentional and bundled with other updates, or
- The PR scope has been misdescribed.
Please verify that
llm-cmd-compstill functions correctly withoutpygmentsand clarify the PR's intended scope.pkgs/helm-beta/default.nix (2)
36-42: LGTM: Package metadata is well-defined.The meta attributes appropriately describe this as a beta version and include all necessary information (description, homepage, license, mainProgram, platforms).
14-14: The vendorHash is not supported by the existing automation.The referenced automation script (
.github/scripts/calculate-go-vendor-hash.py) only supportsreplicatedandkotspackages. Helm-beta is not in therepo_mapand cannot be verified using this script. If helm-beta is a new package, the automation infrastructure would need to be updated to support it.Likely an incorrect or invalid review comment.
home/modules/kubernetes/default.nix (1)
11-11: Confirm the intention to use Helm 4.0 Beta in production.You're replacing the stable
kubernetes-helmwithhelm-beta(v4.0.0-beta.1). Beta versions may have stability issues or breaking changes. Please confirm this is intentional for your dotfiles environment, or consider:
- Keeping both packages available during the beta period
- Waiting for the stable Helm 4.0 release
pkgs/default.nix (1)
15-15: LGTM: Package registration follows standard pattern.The helm-beta package is correctly registered using the standard
pkgs.callPackagepattern, consistent with other custom packages in the repository.overlays/default.nix (1)
38-50: Confirm vendorHash matches tailscale v1.88.4 with Go 1.25.1.You bumped src.sha256 but kept vendorHash static. For buildGoModule, vendorHash usually changes per upstream rev and can also drift with toolchain changes. Please re‑derive and update if needed to avoid build failures.
pkgs/replicated/default.nix (1)
8-21: Version/hash bump looks consistent.Version, source hash, and vendorHash entries are aligned for 0.116.0. No issues spotted here.
.github/workflows/update-sbctl.yml (1)
82-83: Good: Action updated to v7.The
peter-evans/create-pull-requestaction has been correctly updated from v5 to v7, addressing the previous review comment..github/scripts/test-package.sh (4)
1-16: LGTM! Well-structured argument handling.The script correctly uses
set -euo pipefailfor safe error handling and dynamically discovers available packages fromupdate-package.pyfor consistency.
18-32: LGTM! Correct package name mapping.The mapping correctly handles special cases where the Nix package name or binary name differs from the package identifier.
34-69: LGTM! Robust build and binary verification.The script properly uses Nix overlays, handles build failures, verifies binary availability, and attempts multiple methods to extract version information with appropriate fallbacks.
71-97: LGTM! Appropriate platform-specific validations.The platform checks correctly validate that packages are tested on their supported platforms and provide helpful warnings for platform mismatches. The VimR app bundle verification is a useful additional validation step.
.github/workflows/update-kots.yml (2)
30-91: Good: Matrix aggregation issue resolved.The workflow correctly splits hash calculation into separate
calculate-darwin-hashandcalculate-linux-hashjobs, each running on the appropriate platform and producing distinct outputs. This addresses the previous review comment about matrix job output aggregation.
140-141: Good: Action updated to v7.The
peter-evans/create-pull-requestaction has been correctly updated from v5 to v7, addressing the previous review comment..github/scripts/calculate-go-vendor-hash.py (6)
1-16: LGTM! Well-documented with clear type hints.The module docstring and function signature with type hints provide good clarity for maintainers.
18-22: LGTM! Proper version validation.The regex validation ensures version format correctness before attempting to clone the repository, preventing unnecessary network calls with invalid versions.
24-41: LGTM! Safe repository cloning with validation.The temporary directory ensures cleanup,
check=Trueensures errors are caught, and thego.modexistence check validates the repository structure. The subprocess security warnings from static analysis are false positives here since the version is validated and owner/repo come from a controlled mapping.
43-50: LGTM! Correct Go module vendoring workflow.The sequence of
go mod downloadfollowed bygo mod vendoris the standard approach for creating a reproducible vendor directory.
52-71: LGTM! Correct Nix hash calculation.The two-step hash calculation (base32 then SRI conversion) follows Nix conventions and ensures compatibility with Nix package definitions.
73-115: LGTM! Proper error handling and GitHub Actions integration.The argument validation, package mapping, and GitHub Actions output writing are all correctly implemented. The broad exception catch at line 110 is appropriate for a main entry point to ensure clean error reporting and exit.
| { lib, buildGoModule, fetchFromGitHub, installShellFiles }: | ||
|
|
||
| buildGoModule rec { | ||
| pname = "helm-beta"; | ||
| version = "4.0.0-beta.1"; | ||
|
|
||
| src = fetchFromGitHub { | ||
| owner = "helm"; | ||
| repo = "helm"; | ||
| rev = "v${version}"; | ||
| sha256 = "sha256-M/le8jY7i+Nqd6bPB4tvvOSQj1TFwOYdUy8N6klOLG4="; | ||
| }; |
There was a problem hiding this comment.
Inconsistency: This change doesn't align with the stated PR objectives.
The PR summary describes "automated package update workflows for custom packages (VimR, Replicated, KOTS, sbctl)" but these files introduce a new Helm Beta package, which is not mentioned in the objectives. Please clarify whether:
- This change belongs in a different PR, or
- The PR objectives/summary should be updated to include Helm Beta
🤖 Prompt for AI Agents
In pkgs/helm-beta/default.nix lines 1 to 12, the added Helm Beta package is
inconsistent with the PR summary (which targets automated updates for VimR,
Replicated, KOTS, sbctl); either remove this Helm Beta change from this PR and
move it to its own PR, or update the PR title/description and checklist to
explicitly include adding the Helm Beta package so the diff matches the stated
objectives. Ensure the PR body and any metadata mention the Helm Beta addition
and rationale if you choose to keep it here.
| ldflags = [ | ||
| "-w" | ||
| "-s" | ||
| "-X helm.sh/helm/v3/internal/version.version=v${version}" | ||
| "-X helm.sh/helm/v3/internal/version.gitCommit=${src.rev}" | ||
| ]; |
There was a problem hiding this comment.
🧩 Analysis chain
Verify the ldflags import path for Helm 4.0.
The ldflags reference helm.sh/helm/v3/internal/version for Helm 4.0.0-beta.1. This seems inconsistent - please verify whether Helm 4.0 still uses the v3 import path or if it should be updated to v4.
🌐 Web query:
What is the correct internal version package import path for Helm 4.0.0-beta.1?
💡 Result:
The import path is:
helm.sh/helm/v4/internal/version. [1][2]
Sources:
[1] pkg.go.dev module helm.sh/helm/v4.
[2] Example internal packages under helm.sh/helm/v4 on pkg.go.dev (e.g. internal/chart/v3/util).
Update ldflags to use v4 import paths for Helm 4.0.0-beta.1.
The correct internal version package import path for Helm 4.0.0-beta.1 is helm.sh/helm/v4/internal/version, not v3. Lines 23-24 must be updated:
"-X helm.sh/helm/v4/internal/version.version=v${version}"
"-X helm.sh/helm/v4/internal/version.gitCommit=${src.rev}"
🤖 Prompt for AI Agents
In pkgs/helm-beta/default.nix around lines 20 to 25, the ldflags are using the
v3 internal version import path which is incorrect for Helm 4.0.0-beta.1; change
the two -X flags to use helm.sh/helm/v4/internal/version (i.e. replace
helm.sh/helm/v3/internal/version with helm.sh/helm/v4/internal/version for both
version and gitCommit flags) so the build embeds the correct v4 package symbols.
The update-sbctl job was emitting outputs from the check step but lacked a job-level outputs section, causing downstream jobs referencing needs.update-sbctl.outputs.* to fail. Added job-level outputs mapping: - updated: for conditional job execution - old-version: for PR body and commit messages - new-version: for PR body, commit messages, and branch naming - darwin-hash: for PR body - linux-hash: for PR body This ensures all outputs from the check step are properly exposed to downstream test-macos and create-pull-request jobs. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
Instead of constructing the download URL from a string template, now query the release assets array to find the actual tarball. This makes the code resilient to: - Changes in asset naming conventions (e.g., VimR-v0.57.0 vs VimR-0.57.0) - Different compression formats (if they switch from tar.bz2) - Missing or renamed assets (fails gracefully with clear error message) The script now: 1. Searches release["assets"] for files matching VimR-*.tar.bz2 pattern 2. Uses the actual browser_download_url from the asset 3. Prints the found asset name for transparency 4. Returns None with clear error if no matching asset is found This prevents 404 errors during nix-prefetch-url and makes the update process more robust. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
The calculate_go_source_hash function was incorrectly using nix-prefetch-git, which produces a git repository hash. However, fetchFromGitHub in Nix expects a tarball hash from the GitHub-generated archive. Changed implementation to: - Use nix-prefetch-url with --unpack on GitHub's archive URL - Fetch https://github.com/{owner}/{repo}/archive/{rev}.tar.gz - This produces the correct hash that fetchFromGitHub expects This fixes hash mismatches that would occur when Nix tries to fetch the source, as the git clone hash is different from the archive tarball hash. The function is used by update_github_source_package() which handles packages of type: go-module-simple, go-module-platform-specific, and github-source. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
Changed check-package-status workflow to not immediately fail on non-zero exit codes from update-package.py, since the script returns exit code 1 when showing usage/help (which includes the package list). Key changes: - Capture script output and exit code but continue processing - Note the exit code with a warning instead of failing immediately - Attempt to parse packages from output regardless of exit code - Only fail if package list extraction/parsing actually fails - Include exit code in error messages for better diagnostics - Add success message showing how many packages were discovered This makes the workflow more robust - it focuses on whether we can extract a valid package list rather than strict exit code checking. The script still prints "Available packages:" even when exiting with code 1 for usage, and we should be able to parse that. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
The regex replacement was always using "hash =" even when the original file
used "sha256 =". This could cause inconsistency in package files.
Changed the replacement to use a lambda function that captures the original
attribute name (sha256 or hash) and preserves it in the replacement:
Before:
re.sub(r'(sha256|hash)\s*=\s*"[^"]+";', f'hash = "{new_hash}";', content)
# Always outputs: hash = "..."
After:
re.sub(r'(sha256|hash)\s*=\s*"[^"]+";',
lambda m: f'{m.group(1)} = "{new_hash}";', content)
# Outputs: sha256 = "..." or hash = "..." depending on what was there
This ensures the update process doesn't change the attribute name style,
maintaining consistency with the original package file format.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
TL;DR ----- Establishes comprehensive GitHub Actions workflows and supporting scripts to automate package updates, testing, and dependency management across the dotfiles repository. Details -------- Transforms package maintenance from a manual, error-prone process into an automated system that proactively monitors for updates and ensures package quality. Previously, keeping custom Nix packages up-to-date meant manually checking upstream releases, calculating hashes on multiple platforms, and hoping nothing broke. This was particularly painful for packages like KOTS and Replicated CLI that need platform-specific vendor hashes. The new automation handles the entire update lifecycle: detecting new releases via GitHub API, calculating necessary hashes (including platform-specific ones), running build and installation tests, and creating pull requests with all changes ready for review. Daily health checks catch breakages early, before they impact development workflows. Python scripts provide the core logic for package discovery and updates, supporting everything from simple binaries to complex Go modules with platform-specific dependencies. The workflows orchestrate these scripts across macOS and Linux runners to ensure accurate hash calculations and comprehensive testing. Dependabot keeps the automation itself current with the latest GitHub Actions. This foundation makes adding new packages trivial—just drop them in the pkgs directory following existing patterns, and automation handles the rest. No more forgotten updates or broken builds discovered at the worst possible moment. --------- Co-authored-by: Claude <noreply@anthropic.com>
TL;DR
Establishes comprehensive GitHub Actions workflows and supporting scripts to automate package updates, testing, and dependency management across the dotfiles repository.
Details
Transforms package maintenance from a manual, error-prone process into an automated system that proactively monitors for updates and ensures package quality. Previously, keeping custom Nix packages up-to-date meant manually checking upstream releases, calculating hashes on multiple platforms, and hoping nothing broke. This was particularly painful for packages like KOTS and Replicated CLI that need platform-specific vendor hashes.
The new automation handles the entire update lifecycle: detecting new releases via GitHub API, calculating necessary hashes (including platform-specific ones), running build and installation tests, and creating pull requests with all changes ready for review. Daily health checks catch breakages early, before they impact development workflows.
Python scripts provide the core logic for package discovery and updates, supporting everything from simple binaries to complex Go modules with platform-specific dependencies. The workflows orchestrate these scripts across macOS and Linux runners to ensure accurate hash calculations and comprehensive testing. Dependabot keeps the automation itself current with the latest GitHub Actions.
This foundation makes adding new packages trivial—just drop them in the pkgs directory following existing patterns, and automation handles the rest. No more forgotten updates or broken builds discovered at the worst possible moment.
Summary by CodeRabbit