IONOS(ci): migrate build to cached matrix pipeline #226
printminion-co wants to merge 20 commits into ionos-dev from
Conversation
Pull request overview
This PR refactors the hidrive-next-build GitHub Actions workflow into a cached, multi-job matrix pipeline that avoids rebuilding unchanged components by using JFrog + GitHub Actions cache signals computed up-front.
Changes:
- Introduces `prepare-matrix` to compute cache keys, generate the apps matrix, and decide which apps to build vs restore.
- Adds a cached `build-custom-npms` job and a parallel `build-apps` matrix job that uploads per-app artifacts to JFrog.
- Updates `hidrive-next-build` to restore artifacts and build Nextcloud core only.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| .github/workflows/hidrive-next-build.yml | Replaces monolithic build with prepare/build/restore pipeline + JFrog/GHA caching and workflow_dispatch inputs. |
| .github/scripts/detect-app-cache.sh | New script to compute app SHAs and decide build vs restore across JFrog and GitHub cache. |
```shell
echo "  Running: jf rt s \"$JFROG_PATH\""
SEARCH_OUTPUT=$(jf rt s "$JFROG_PATH" 2>&1)
SEARCH_EXIT_CODE=$?

echo "  Search exit code: $SEARCH_EXIT_CODE"
```
With set -e enabled, SEARCH_OUTPUT=$(jf rt s ... ) will cause the script to exit immediately when jf rt s returns a non-zero status, so the intended fallback handling (checking exit code / falling back to GitHub cache) won't run. Wrap the JFrog search in an if ...; then block or temporarily disable -e while capturing output and exit status.
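The suggested fix can be sketched as follows — a minimal, self-contained illustration of capturing output and exit status under `set -e`, with a hypothetical `search` function standing in for `jf rt s`:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholder for `jf rt s` — always fails with status 2 (illustrative).
search() { echo "artifact not found" >&2; return 2; }

# Running the command substitution as an `if` condition suppresses
# errexit for it, so a non-zero status reaches the fallback branch
# instead of killing the script.
if SEARCH_OUTPUT=$(search 2>&1); then
  SEARCH_EXIT_CODE=0
else
  SEARCH_EXIT_CODE=$?
fi

echo "Search exit code: $SEARCH_EXIT_CODE"  # → Search exit code: 2
```

The same pattern works for the real `jf rt s` call: the GitHub-cache fallback branch then runs whenever `SEARCH_EXIT_CODE` is non-zero.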
```yaml
      - name: Compute effective cache version
        id: compute_cache_version
        run: |
          EFFECTIVE_VERSION="${{ env.CACHE_VERSION }}${{ github.event.inputs.cache_version_suffix && format('-{0}', github.event.inputs.cache_version_suffix) || '' }}"
```
cache_version_suffix is user-controlled (workflow_dispatch) and is concatenated into cache keys and JFrog paths. If it contains /, spaces, or shell-special characters it can generate invalid cache keys and unexpected JFrog paths/names. Consider validating/sanitizing this input (e.g., restrict to [A-Za-z0-9._-] and fail fast otherwise) before using it in EFFECTIVE_VERSION.
Suggested change (replacing the `EFFECTIVE_VERSION=` line):
```shell
CACHE_VERSION_SUFFIX="${{ github.event.inputs.cache_version_suffix || '' }}"
if [ -n "$CACHE_VERSION_SUFFIX" ] && [[ ! "$CACHE_VERSION_SUFFIX" =~ ^[A-Za-z0-9._-]+$ ]]; then
  echo "Invalid cache_version_suffix: only [A-Za-z0-9._-] are allowed" >&2
  exit 1
fi
EFFECTIVE_VERSION="${{ env.CACHE_VERSION }}"
if [ -n "$CACHE_VERSION_SUFFIX" ]; then
  EFFECTIVE_VERSION="${EFFECTIVE_VERSION}-${CACHE_VERSION_SUFFIX}"
fi
```
```shell
if [ "$SOURCE" == "jfrog" ]; then
  JFROG_PATH=$(echo "$app_json" | jq -r '.jfrog_path')
  ARCHIVE_NAME="${APP_NAME}-${APP_SHA}.tar.gz"
  echo "Downloading from JFrog: $JFROG_PATH"
  jf rt download "$JFROG_PATH" "$ARCHIVE_NAME" --flat=true
  mkdir -p "$(dirname "$APP_PATH")"
  tar -xzf "$ARCHIVE_NAME" -C "$(dirname "$APP_PATH")"
  rm -f "$ARCHIVE_NAME"
```
When restoring cached apps from JFrog, the workflow reconstructs the archive name as "${APP_NAME}-${APP_SHA}.tar.gz" using the full SHA. However uploads use an 8-char suffix (and sometimes a "-cnpms-" compound suffix), so this filename will not match the artifact stored in JFrog and restores will fail. Prefer passing/using the exact archive_name from detect-app-cache.sh (or deriving it from jfrog_path via basename) instead of rebuilding it from the full SHA.
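One way to implement the `basename` option (a sketch; the JSON field and example path are hypothetical but match the shape `detect-app-cache.sh` emits):

```shell
#!/usr/bin/env bash
set -euo pipefail

# jfrog_path from a restore entry emitted by detect-app-cache.sh
# (illustrative value with the 8-char + -cnpms- compound suffix).
jfrog_path='repo-snapshot/apps/v1.2/viewer-a1b2c3d4-cnpms-e5f6a7b8.tar.gz'

# Derive the archive name from jfrog_path rather than rebuilding it
# from the full 40-char SHA — the name then always matches what was
# actually uploaded.
ARCHIVE_NAME=$(basename "$jfrog_path")

echo "$ARCHIVE_NAME"  # → viewer-a1b2c3d4-cnpms-e5f6a7b8.tar.gz
```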
```shell
echo "❌ Cannot restore $APP_NAME from GitHub cache within a shell step."
echo "   Cache key: $CACHE_KEY"
echo "   GitHub Actions cache requires 'actions/cache/restore@v4' as a dedicated workflow step."
exit 1
```
detect-app-cache.sh can mark cached apps with source: github-cache, but this job exits with an error for that case. As-is, any run without JFrog access that finds GitHub cache hits will fail during restore. Either implement GitHub cache restore via actions/cache/restore@v4 (likely as a matrix/loop of dedicated steps) or change the detection logic so github-cache entries are treated as needs build instead of apps_to_restore.
Suggested change:
```shell
echo "⚠️ Skipping $APP_NAME from GitHub cache in this shell-based restore step."
echo "   Cache key: $CACHE_KEY"
echo "   GitHub Actions cache requires 'actions/cache/restore@v4' as a dedicated workflow step."
echo "   Leaving this app to be handled by upstream cache-detection/build logic."
continue
```
```yaml
      - name: Setup JFrog CLI
        uses: jfrog/setup-jfrog-cli@7c95feb32008765e1b4e626b078dfd897c4340ad # v4.4.1
        env:
          JF_URL: ${{ secrets.JF_ARTIFACTORY_URL }}
          JF_USER: ${{ secrets.JF_ARTIFACTORY_USER }}
          JF_ACCESS_TOKEN: ${{ secrets.JF_ACCESS_TOKEN }}

      - name: Restore custom-npms from JFrog
        if: steps.app-config.outputs.needs-custom-npms == 'true'
        run: |
          CACHE_KEY="${{ needs.prepare-matrix.outputs.custom_npms_cache_key }}"
          EFFECTIVE_VERSION="${{ needs.prepare-matrix.outputs.effective_cache_version }}"
          ARCHIVE_NAME="custom-npms-${CACHE_KEY}.tar.gz"
          JFROG_PATH="${{ env.ARTIFACTORY_REPOSITORY_SNAPSHOT }}/custom-npms/${EFFECTIVE_VERSION}/${ARCHIVE_NAME}"

          echo "Downloading custom-npms from JFrog: $JFROG_PATH"
          if jf rt download "$JFROG_PATH" "$ARCHIVE_NAME" --flat=true; then
            tar -xzf "$ARCHIVE_NAME"
            rm -f "$ARCHIVE_NAME"
            echo "✅ Custom-npms restored from JFrog"
          else
            echo "❌ Failed to restore custom-npms from JFrog"
            exit 1
          fi
```
build-apps always runs jfrog/setup-jfrog-cli and (for some apps) downloads custom-npms from JFrog. If secrets/JFrog are unavailable (e.g., forked PRs) or build-custom-npms ran but skipped JFrog upload, this will fail even though the custom-npms are available via the GitHub cache. Gate JFrog setup/download on credential availability and add a GitHub cache restore (or artifact download) path for custom-npms when JFrog isn't usable.
```yaml
      - name: Restore custom-npms from JFrog
        run: |
          CACHE_KEY="${{ needs.prepare-matrix.outputs.custom_npms_cache_key }}"
          EFFECTIVE_VERSION="${{ needs.prepare-matrix.outputs.effective_cache_version }}"
          ARCHIVE_NAME="custom-npms-${CACHE_KEY}.tar.gz"
          JFROG_PATH="${{ env.ARTIFACTORY_REPOSITORY_SNAPSHOT }}/custom-npms/${EFFECTIVE_VERSION}/${ARCHIVE_NAME}"

          echo "Downloading custom-npms from JFrog: $JFROG_PATH"
          jf rt download "$JFROG_PATH" "$ARCHIVE_NAME" --flat=true
          tar -xzf "$ARCHIVE_NAME"
          rm -f "$ARCHIVE_NAME"
          echo "✅ Custom-npms restored"
```
This job always attempts to restore custom-npms from JFrog, but earlier steps explicitly allow skipping JFrog upload when credentials are missing. In that case, build-custom-npms may still have produced the dependency and saved it to the GitHub cache, yet this restore step will fail. Consider restoring from GitHub Actions cache when available (or downloading an artifact from build-custom-npms) and only using JFrog when credentials exist.
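A hedged sketch of that fallback (step names, outputs, and the cache `path` are assumptions based on this PR's job wiring, not the actual workflow):

```yaml
      - name: Restore custom-npms from JFrog
        if: steps.jfrog-creds.outputs.available == 'true'
        run: |
          jf rt download "$JFROG_PATH" "$ARCHIVE_NAME" --flat=true
          tar -xzf "$ARCHIVE_NAME" && rm -f "$ARCHIVE_NAME"

      # Fallback when JFrog credentials are absent: restore the archive
      # build-custom-npms saved to the GitHub Actions cache.
      - name: Restore custom-npms from GitHub Actions cache
        if: steps.jfrog-creds.outputs.available != 'true'
        uses: actions/cache/restore@v4
        with:
          path: custom-npms
          key: ${{ needs.prepare-matrix.outputs.custom_npms_cache_key }}
          fail-on-cache-miss: true
```

With `fail-on-cache-miss: true`, the no-JFrog path fails with a clear cache-key diagnostic instead of an opaque downstream build error.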
printminion-co
left a comment
Iteration 1: requesting changes — two actionable bugs and three minor issues found.
Bugs (must fix):
1. `.github/scripts/detect-app-cache.sh`, lines 145–157 — incomplete JSON when the app directory is missing or is not a git repo.
   When the `APP_PATH` directory does not exist or `git rev-parse HEAD` returns empty, the script does a `continue` after adding `{name, sha: "unknown"}` to `apps_to_build` — with no `archive_name` or `jfrog_path` fields. In `hidrive-next-build` the "Restore newly built apps" loop then does:
   ```shell
   ARCHIVE_NAME=$(echo "$app_json" | jq -r '.archive_name')  # → "null"
   JFROG_PATH=$(echo "$app_json" | jq -r '.jfrog_path')      # → "null"
   jf rt download "null" "null" --flat=true                  # obscure failure
   ```
   The fix is to `exit 1` immediately with a clear error message in these two early-exit branches instead of continuing, so the problem is surfaced at `prepare-matrix` rather than failing mysteriously in `hidrive-next-build`.
2. `.github/workflows/hidrive-next-build.yml`, `build-apps` job — `actions/cache/save` runs before the JFrog upload.
   Step order is: build → compute cache key → `actions/cache/save` → Upload to JFrog. If the JFrog upload fails (network blip, quota, etc.) the job exits 1, but the GitHub Actions cache already has the artifact. On the next run `detect-app-cache.sh` finds the artifact in GitHub cache only (the JFrog search misses), adds it to `apps_to_restore` with `source: github-cache`, and `hidrive-next-build` exits 1 with "Cannot restore from GitHub cache within a shell step." The pipeline is stuck and requires a manual `workflow_dispatch` with `force_rebuild=true` to escape. Fix: move the `actions/cache/save` step to run after the JFrog upload step, or make it conditional on JFrog upload success (e.g., combine into one step with proper error handling so both succeed or both are skipped).
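The reordering fix for bug 2 could look like this — a sketch only; the step names and the `cache-key` step id are illustrative, not the PR's exact ones:

```yaml
      - name: Upload to JFrog
        run: jf rt upload "$ARCHIVE_NAME" "$JFROG_PATH"

      # Runs only if the JFrog upload succeeded, so a failed upload can
      # never leave a GitHub-cache-only artifact that the next run's
      # detection treats as a hit.
      - name: Save to GitHub Actions cache
        uses: actions/cache/save@v4
        with:
          path: ${{ matrix.app_info.path }}
          key: ${{ steps.cache-key.outputs.key }}
```

Because a step failure stops the job by default, placing the save after the upload makes "artifact in GitHub cache" imply "artifact in JFrog".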
Minor issues (should fix):
1. `.github/scripts/detect-app-cache.sh`, line 254 — `CACHE_EXISTS` is dead code.
   The variable is initialised to `"false"` on line 254 and set to `"true"` on line 267, but is never read after either assignment. Remove both assignments.
2. `.github/workflows/hidrive-next-build.yml`, line 336 — `build-apps` has no JFrog credential guard.
   `prepare-matrix` and `build-custom-npms` both have an explicit "Check JFrog credentials" step that gates the JFrog setup. `build-apps` calls `jfrog/setup-jfrog-cli` unconditionally, and the "Upload to JFrog" step also has no guard. This is inconsistent with the pattern established elsewhere in the same file. Even if JFrog is always available in this environment, add a credential guard (or an explicit comment explaining why the guard is intentionally omitted) for consistency and resilience.
3. `.github/workflows/hidrive-next-build.yml` — job name typos.
   - Line 733: `hidirve-next-artifact-to-ghcr_io` → `hidrive-next-artifact-to-ghcr_io`
   - Line 790: `trigger-remote-dev-worflow` → `trigger-remote-dev-workflow`
   These become part of the public job ID referenced in status badges and API calls.
printminion-co
left a comment
Iteration 2 review — three bugs remain after the iteration 1 fixes.
1. `hidrive-next-build` — "Restore custom-npms from JFrog" runs unconditionally even when JFrog CLI is not installed.
   "Setup JFrog CLI" (~line 503) is correctly gated on `steps.jfrog-creds.outputs.available == 'true'`, but the very next step, "Restore custom-npms from JFrog" (~line 514), has no `if:` guard at all. When `jfrog-creds` is false, `jf` is not on PATH and the step fails with "command not found", blocking the entire final build job.
   Fix — add to the step:
   ```yaml
   if: steps.jfrog-creds.outputs.available == 'true'
   ```
   Note: the "Restore all apps from JFrog and GitHub cache" step also calls `jf` inside a shell loop (the `has_apps_to_build` path, ~line 574) without an outer `if:` guard on `jfrog-creds`. Same class of bug.
2. `build-apps` — "Restore custom-npms from JFrog" is not guarded by `jfrog-creds`.
   The step is correctly gated on `steps.app-config.outputs.needs-custom-npms == 'true'`, but contains `jf rt download` with no check that JFrog CLI is actually installed. If `jfrog-creds.outputs.available == 'false'`, `jf` is absent and this step fails.
   Fix:
   ```yaml
   if: steps.app-config.outputs.needs-custom-npms == 'true' && steps.jfrog-creds.outputs.available == 'true'
   ```
3. `build-custom-npms` — "Save custom-npms to GitHub Actions cache" still runs BEFORE the JFrog upload.
   The analogous issue was fixed in `build-apps` (iteration 1), but the same ordering problem persists in `build-custom-npms` (PR HEAD lines ~222–282):
   - Save custom-npms to GitHub Actions cache ← ~line 222
   - Check JFrog credentials ← ~line 228
   - Upload custom-npms to JFrog ← ~line 252
   If the JFrog upload fails, the GH Actions cache already has the artifact. On the next run `hidrive-next-build` exclusively downloads from JFrog — so "Restore custom-npms from JFrog" will keep failing because the JFrog artifact was never written. The GH cache hit is a false positive.
   Fix: move "Save custom-npms to GitHub Actions cache" to after "Upload custom-npms to JFrog", matching the ordering already applied in `build-apps`.
Not an issue — CNPMS_SHORT extraction is consistent
Both detect-app-cache.sh and the build-apps Compute app cache key step use ##*- then :0:8 on the same custom_npms_cache_key output. The extraction paths are identical and will always produce the same 8-char prefix.
printminion-co
left a comment
Iteration 3 review — two blocking issues remain.
All prior fixes are confirmed in place. However, two blocking issues remain that were not addressed:
Issue 1 (CRITICAL): Unguarded `jf` call in `build-apps` — "Upload to JFrog" step.
The step "Upload ${{ matrix.app_info.name }} to JFrog" (immediately after "Compute app cache key") has no `if:` guard. It calls `jf rt upload` unconditionally. When `JF_ARTIFACTORY_URL` is not set, the setup-jfrog-cli step is skipped, so `jf` is not in PATH, and this step will fail with a "command not found" error on every matrix runner.
Fix: add `if: steps.jfrog-creds.outputs.available == 'true'` to that step:
```yaml
      - name: Upload ${{ matrix.app_info.name }} to JFrog
        if: steps.jfrog-creds.outputs.available == 'true'
        run: |
          ...
```
Issue 2 (CRITICAL): `hidrive-next-build` has no fallback and no explicit failure when JFrog is unavailable.
When `jfrog-creds` is unavailable, both restore steps are silently skipped (not failed). The job then proceeds to `make -f IONOS/Makefile build_nextcloud_only` with no custom-npms and no app directories in place. This causes an opaque build failure deep in the Makefile rather than a clear, early error.
The guard step even logs "artifact restore steps will fail" — but they don't fail, they skip, misleading the operator.
This job has no GitHub Actions cache fallback for apps (unlike `build-apps`, which at least has `actions/cache/save`). Since the entire restore strategy is JFrog-only, skipping JFrog silently is not safe.
Fix: add an explicit gate step that fails fast when JFrog is unavailable (since there is no alternative restore path):
```yaml
      - name: Assert JFrog credentials are available (required for artifact restore)
        if: steps.jfrog-creds.outputs.available != 'true'
        run: |
          echo "❌ JFrog credentials are required for hidrive-next-build artifact restore."
          echo "   Set JF_ARTIFACTORY_URL, JF_ARTIFACTORY_USER, and JF_ACCESS_TOKEN secrets."
          exit 1
```
Summary
| # | Severity | Location | Issue |
|---|---|---|---|
| 1 | Blocking | `build-apps` / Upload step | Missing `if: steps.jfrog-creds.outputs.available == 'true'` guard on `jf rt upload` |
| 2 | Blocking | `hidrive-next-build` | Silent skip when JFrog unavailable leads to opaque downstream failure; no fail-fast gate |
Everything else looks correct: CNPMS_SHORT extraction is consistent between detect-app-cache.sh and the "Compute app cache key" step, cache/save ordering is correct in both build-custom-npms and build-apps, and all previously requested fixes are confirmed.
printminion-co
left a comment
Final review (iteration 4): two blocking issues remain.
1. `github-cache` source in `hidrive-next-build` always triggers `exit 1` (pipeline hard-fail).
   `detect-app-cache.sh` emits `source: "github-cache"` when JFrog is reachable but a specific app artifact is absent from JFrog yet present in GitHub Actions cache (the fallback path at lines 261–267 of `detect-app-cache.sh`). In `hidrive-next-build` → "Restore all apps from JFrog and GitHub cache" (lines 573–579), the `elif [ "$SOURCE" == "github-cache" ]` branch unconditionally calls `exit 1`.
   This means: credentials present (assert gate passes) + at least one app cached only in GH cache → the entire `hidrive-next-build` job fails. There is no `actions/cache/restore@v4` step to handle the GH-cache path. Either:
   - Add an `actions/cache/restore@v4` step for each app with `source == "github-cache"` (requires a dynamic step, which GitHub Actions does not support natively — would need a second shell script or composite action), OR
   - Change `detect-app-cache.sh` so that when JFrog is available and the artifact is NOT found in JFrog it is always queued for rebuild (added to `apps_to_build`, not `apps_to_restore` with `source: github-cache`). This is architecturally cleaner: if JFrog is the canonical artifact store for `hidrive-next-build`, treat a GH-cache-only hit as a cache miss from JFrog's perspective.
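The second option reduces to a simple decision rule; a minimal sketch (the `decide` helper and its labels are hypothetical, not the script's actual code):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Decide build vs restore when JFrog is the canonical artifact store.
# A GitHub-cache-only hit counts as a miss, so the app is rebuilt and
# re-uploaded to JFrog instead of hard-failing the restore step later.
decide() {
  local in_jfrog="$1" in_gh_cache="$2"
  if [ "$in_jfrog" = "true" ]; then
    echo "restore:jfrog"
  else
    echo "build"   # regardless of $in_gh_cache
  fi
}

decide true false   # → restore:jfrog
decide false true   # → build (GH-cache-only hit treated as a miss)
```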
2. `build-apps`: no fallback to restore custom-npms from GitHub Actions cache when JFrog is unavailable.
   `build-apps` restores custom-npms only from JFrog (a step guarded by `steps.jfrog-creds.outputs.available == 'true'`). When JFrog credentials are absent, the restore is silently skipped and the Build step runs against an empty `custom-npms/` directory, causing a build failure for every app with `needs_custom_npms: true`.
   `build-custom-npms` always saves to GitHub Actions cache at the end (lines 278–282). Add an `actions/cache/restore@v4` step in `build-apps`, guarded by `steps.jfrog-creds.outputs.available != 'true'` (or run unconditionally as a fallback after the JFrog restore), to cover the no-JFrog path.
```yaml
      - name: Check JFrog credentials
        id: jfrog-creds
        env:
          JF_URL: ${{ secrets.JF_ARTIFACTORY_URL }}
        run: |
          if [ -n "$JF_URL" ]; then
            echo "available=true" >> $GITHUB_OUTPUT
          else
            echo "available=false" >> $GITHUB_OUTPUT
            echo "⚠ JFrog credentials not set — artifact restore steps will fail"
          fi

      - name: Assert JFrog credentials are available (required for artifact restore)
        if: steps.jfrog-creds.outputs.available != 'true'
        run: |
          echo "❌ JFrog credentials are required for hidrive-next-build artifact restore."
          echo "   Set JF_ARTIFACTORY_URL, JF_ARTIFACTORY_USER, and JF_ACCESS_TOKEN secrets."
          exit 1
```
The JFrog availability check sets available=true when only JF_URL is present, but this job later requires JF_USER and JF_ACCESS_TOKEN too. Consider validating all required secrets up-front and failing with a clear message when any are missing (or skipping JFrog restore entirely if that’s an intended mode).
```shell
elif [ "$SOURCE" == "github-cache" ]; then
  CACHE_KEY=$(echo "$app_json" | jq -r '.cache_key')
  echo "❌ Cannot restore $APP_NAME from GitHub cache within a shell step."
  echo "   Cache key: $CACHE_KEY"
  echo "   GitHub Actions cache requires 'actions/cache/restore@v4' as a dedicated workflow step."
  exit 1
fi
```
apps_to_restore entries can have source: github-cache (from detect-app-cache.sh), but this step intentionally exits with an error in that case, which will break builds whenever a cache hit comes from GitHub cache instead of JFrog. Either implement actions/cache/restore@v4 restores for those cache keys, or change the detection logic to never emit github-cache when the final build requires JFrog-only restores.
```shell
# Check if artifact exists in JFrog with verbose output
echo "  Running: jf rt s \"$JFROG_PATH\""
SEARCH_OUTPUT=$(jf rt s "$JFROG_PATH" 2>&1)
SEARCH_EXIT_CODE=$?
```
With set -e enabled, a failing jf rt s inside SEARCH_OUTPUT=$(...) will cause the script to exit immediately, so it won’t reach the fallback to the GitHub cache check. Wrap the JFrog search in an if ...; then / || true pattern (or temporarily disable set -e) so JFrog search failures are handled as intended.
```yaml
      - name: Check JFrog credentials
        id: jfrog-available
        if: github.event.inputs.force_rebuild != 'true'
        env:
          JF_URL: ${{ secrets.JF_ARTIFACTORY_URL }}
        run: |
          if [ -n "$JF_URL" ]; then
            echo "available=true" >> $GITHUB_OUTPUT
          else
            echo "available=false" >> $GITHUB_OUTPUT
          fi

      - name: Setup JFrog CLI
        if: steps.jfrog-available.outputs.available == 'true'
        uses: jfrog/setup-jfrog-cli@7c95feb32008765e1b4e626b078dfd897c4340ad # v4.4.1
        env:
          JF_URL: ${{ secrets.JF_ARTIFACTORY_URL }}
          JF_USER: ${{ secrets.JF_ARTIFACTORY_USER }}
          JF_ACCESS_TOKEN: ${{ secrets.JF_ACCESS_TOKEN }}
```
jfrog-available is set based only on JF_URL, but the subsequent jfrog/setup-jfrog-cli step requires JF_USER and JF_ACCESS_TOKEN too. Consider checking all three secrets and only marking JFrog as available when the full credential set is present, to avoid attempting JFrog setup with incomplete credentials.
```yaml
        run: |
          if [ -n "$JF_URL" ]; then
            echo "available=true" >> $GITHUB_OUTPUT
          else
            echo "available=false" >> $GITHUB_OUTPUT
            echo "⚠ JFrog credentials not set — JFrog upload will be skipped"
```
This JFrog credential gate only checks JF_URL, but the upload/ping steps require JF_USER and JF_ACCESS_TOKEN as well. Recommend treating JFrog as available only when all required secrets are set, otherwise skip setup/ping/upload cleanly.
Suggested change (adds `JF_USER`/`JF_ACCESS_TOKEN` to `env:` and checks the full credential set):
```yaml
          JF_USER: ${{ secrets.JF_ARTIFACTORY_USER }}
          JF_ACCESS_TOKEN: ${{ secrets.JF_ACCESS_TOKEN }}
        run: |
          if [ -n "$JF_URL" ] && [ -n "$JF_USER" ] && [ -n "$JF_ACCESS_TOKEN" ]; then
            echo "available=true" >> $GITHUB_OUTPUT
          else
            echo "available=false" >> $GITHUB_OUTPUT
            echo "⚠ JFrog credentials incomplete — setup, ping, and upload will be skipped"
```
```yaml
        run: |
          if [ -n "$JF_URL" ]; then
            echo "available=true" >> $GITHUB_OUTPUT
          else
            echo "available=false" >> $GITHUB_OUTPUT
            echo "⚠ JFrog credentials not set — JFrog steps will fail"
```
This JFrog credential check only looks at JF_URL, but later steps depend on JF_USER/JF_ACCESS_TOKEN too. It would be more robust to mark JFrog available only when the full credential set is present, otherwise skip JFrog restore/upload steps.
Suggested change (adds `JF_USER`/`JF_ACCESS_TOKEN` to `env:` and checks the full credential set):
```yaml
          JF_USER: ${{ secrets.JF_ARTIFACTORY_USER }}
          JF_ACCESS_TOKEN: ${{ secrets.JF_ACCESS_TOKEN }}
        run: |
          if [ -n "$JF_URL" ] && [ -n "$JF_USER" ] && [ -n "$JF_ACCESS_TOKEN" ]; then
            echo "available=true" >> $GITHUB_OUTPUT
          else
            echo "available=false" >> $GITHUB_OUTPUT
            echo "⚠ Full JFrog credential set not available — skipping JFrog restore/upload steps"
```
|
@copilot apply changes based on the comments in this thread

Implemented the requested thread fixes in commits
a1aa1b6 to 244d999
Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
Replace single monolithic build job with a four-job pipeline:
- prepare-matrix: compute custom-npms cache key, check JFrog, generate 7-entry apps matrix via generate_apps_matrix_json, run detect-app-cache.sh
- build-custom-npms: sequential build of all 4 custom-npm packages as a single cacheable unit; uploads combined tar.gz to JFrog
- build-apps: parallel matrix (7 apps), restores custom-npms from JFrog when needs_custom_npms=true, uploads per-app tar.gz to JFrog
- hidrive-next-build: restores custom-npms + all apps from JFrog/GitHub cache, then runs build_nextcloud_only (skips custom-npm rebuild)
Downstream jobs (upload-to-artifactory, ghcr.io push, remote trigger) updated to depend on the new job DAG.
Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
When check_custom_npms runs jf config add jfrog-server in the same job before detect-app-cache.sh, the second config add fails with "Server ID 'jfrog-server' already exists". Check first and reuse. Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
build-custom-npms checks out with submodules:false for speed but needs IONOS/Makefile to run build_custom_npms. Init only the IONOS submodule rather than all app submodules. Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
Manual git submodule update --init IONOS fails because the submodule URL is SSH (git@github.com:...) and the runner has no key. actions/checkout with submodules:true rewrites submodule URLs to HTTPS+GITHUB_TOKEN automatically. Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
Rollup resolves imports from nc-nextcloud-vue/dist/ using the real (non-symlink) path, so it looks for peer deps like @vueuse/core in nc-nextcloud-vue/node_modules/ rather than the consuming app's node_modules. The archive previously excluded all node_modules/, causing viewer and simplesettings builds to fail with "Rollup failed to resolve import @vueuse/core from nc-nextcloud-vue/dist/...". The other three packages (nc-mdi-svg, nc-mdi-js, nc-vue-material-design-icons) are consumed as file:…/dist (plain files) and do not trigger this issue. Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
The v1.0 archive excluded nc-nextcloud-vue/node_modules; the new v1.1 archive includes it so rollup can resolve peer deps at build time. Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
Updates easystorage-config to 6f4643d which replaces the hardcoded
generate_apps_matrix_json echo statements with dynamic category lists and
normalizes all app build targets to build_${app}_app.
Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
…on custom-npms change
detect-app-cache.sh previously used full 40-char SHAs in all cache keys and JFrog paths. Switch to 8-char short SHAs throughout for readability.
For apps with needs_custom_npms:true (simplesettings, viewer), append the first 8 chars of the custom-npms hash to the cache key:
- standard: v1.2-app-build-viewer-a1b2c3d4
- with cnpms dep: v1.2-app-build-viewer-a1b2c3d4-cnpms-e5f6a7b8
This ensures a custom-npm submodule bump invalidates cached builds for apps that file:-depend on them, preventing stale artifact reuse.
Bump CACHE_VERSION v1.1 → v1.2 to bust all existing caches (keys change).
Add CUSTOM_NPMS_CACHE_KEY env to the detect step and a "Compute app cache key" step in build-apps so save/upload use the same compound key.
Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
- define macros + dynamic rules (remove 5 hardcoded app build targets)
- SPECIAL_BUILD_APPS list for nc_theming and nc-ionos-theme
- .precheck target (working dir + jq validation)
- COMPOSER_INSTALL / NPM_INSTALL / NPM_BUILD common variables
- Enhanced help listing all 7 app targets by category
- validate_app_list_uniqueness script + target
Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
…rive-next-build
- detect-app-cache.sh: include archive_name and jfrog_path in all apps_to_build JSON entries, using the same 8-char short-SHA suffix (and optional -cnpms-<hash> suffix) that build-apps uses when uploading
- hidrive-next-build: read archive_name/jfrog_path from the JSON instead of reconstructing with the full 40-char SHA, which caused jf rt download to fail for every freshly-built app
- replace invalid 'gh cache restore' shell call with a clear diagnostic exit; GitHub Actions cache requires actions/cache/restore@v4 steps and cannot be restored from within a run: script
Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
… action
- prepare-matrix: add "Check JFrog credentials" step (step-output guard) and jfrog/setup-jfrog-cli action (same pinned SHA as other jobs) before check_custom_npms; remove `curl -fL https://install-cli.jfrog.io | sh`, PATH export, and manual `jf config add` from the inline run: block
- detect-app-cache.sh: remove curl | sh install; guard jf usage with `command -v jf` so the script degrades gracefully to GitHub-cache-only when jf is not in PATH instead of silently pulling an unpinned binary
Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
… pipeline jobs
- prepare-matrix: add explicit permissions block with actions:read so GH_TOKEN can call `gh cache list` (Actions cache API) via detect-app-cache.sh
- build-custom-npms, hidrive-next-build: add "Check JFrog credentials" step that emits available=true/false from the JF_URL env var; gate "Setup JFrog CLI" and "Ping JFrog server" on that output so missing secrets produce a clear diagnostic rather than a confusing action failure; also gate "Upload custom-npms to JFrog" so the build job degrades gracefully when JFrog is unavailable instead of failing after a successful build
Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
- exit 1 with a clear message when the app dir is missing or not a git repo
- move actions/cache/save after the JFrog upload in the build-apps job
- remove dead CACHE_EXISTS variable assignments
- add JFrog credential guard and conditional setup-jfrog-cli in build-apps
Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
Agent-Logs-Url: https://github.com/IONOS-Productivity/nc-server/sessions/25427ab9-aedf-4687-ab49-8e77056e504f Co-authored-by: printminion-co <145785698+printminion-co@users.noreply.github.com> Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
Agent-Logs-Url: https://github.com/IONOS-Productivity/nc-server/sessions/25427ab9-aedf-4687-ab49-8e77056e504f Co-authored-by: printminion-co <145785698+printminion-co@users.noreply.github.com> Signed-off-by: Misha M.-Kupriyanov <kupriyanov@strato.de>
244d999 to b18037e
Summary
Migrates the HiDrive Next CI build from a single monolithic job to a cached parallel matrix pipeline.
Pipeline architecture (4 jobs)
- `prepare-matrix` — computes the custom-npms cache key; generates the 7-entry app matrix via `IONOS/Makefile generate_apps_matrix_json`; runs `detect-app-cache.sh` per app to decide build vs. restore
- `build-custom-npms` — builds all 4 custom-npm packages as one cacheable unit; archives to JFrog + GitHub cache
- `build-apps` — parallel matrix (7 apps); restores custom-npms from JFrog for `needs_custom_npms` apps; uploads per-app tarballs to JFrog + GitHub cache
- `hidrive-next-build` — restores custom-npms + all 7 app artifacts; runs `build_nextcloud_only`
Cache correctness
- `detect-app-cache.sh` checks JFrog + GitHub cache per app; skips rebuild when a valid artifact exists
- Apps with `needs_custom_npms: true` (simplesettings, viewer) use a compound cache key `{version}-app-build-{app}-{app_sha8}-cnpms-{cnpms_sha8}` — their cache is busted when custom-npms change even if the app's own SHA is unchanged
- Global cache bust via `CACHE_VERSION: v1.2`
- The custom-npms archive includes `nc-nextcloud-vue/node_modules/` (required for rollup peer-dep resolution via `file:` symlinks)
New scripts / config
- `.github/scripts/detect-app-cache.sh` — per-app cache check with JFrog + GitHub fallback
- `CACHE_VERSION: v1.2` env var controls the global cache bust
IONOS submodule changes (see nc-config#109)
- `generate_apps_matrix_json` — dynamic `emit()` loop over category lists → 7-entry sorted JSON
- `define` macros + dynamic pattern rules (removes 5 hardcoded app build targets)
- `SPECIAL_BUILD_APPS = nc_theming nc-ionos-theme`
- `.precheck` target — validates working directory and `jq` availability
- `COMPOSER_INSTALL` / `NPM_INSTALL` / `NPM_BUILD` common variables
- Enhanced `help` — lists all 7 app targets by category
- `validate_app_list_uniqueness` script + target
Related