[MOSIP-44751] helmsman workflows always deploy java11 profile regardless of selected profile #191
bhumi46 wants to merge 30 commits into mosip:release-0.x.x
Conversation
Commits include:
- [MOSIP-44751] helmsman workflows always deploy java11 profile regardless of selected profile (Signed-off-by: bhumi46 <thisisbn46@gmail.com>)
- [MOSIP-44613],[MOSIP-44620],[MOSIP-44622],[MOSIP-44623],[MOSIP-44626],[MOSIP-44627],[MOSIP-44628] Updated profile based deployment with helmsman and terraform structural and workflow changes
- [MOSIP-44608] updated terraform script to support ebs volume for activemq as optional (Signed-off-by: abhishek-1809 <abhisahu1920@gmail.com>)

Several commits are co-authored-by coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>.
Walkthrough

Introduces profile-aware deployment: adds profile inputs and auto-detection to GitHub workflows, makes the Terraform backend and tfvars profile-aware, adds eSignet DSFs and Helm hooks, and implements ActiveMQ EBS/NFS provisioning with Terraform/Ansible scripts and supporting Terraform variable wiring.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User / Git Push
    participant Workflow as CI Workflow
    participant ProfileDetect as Profile Detector
    participant BackendScript as configure-backend.sh
    participant Terraform as Terraform Runner
    participant Helmsman as Helmsman (Helm)
    participant K8s as Kubernetes Cluster
    User->>Workflow: push / manual dispatch
    Workflow->>ProfileDetect: use input or git-diff to detect profile
    ProfileDetect-->>Workflow: return PROFILE
    Workflow->>BackendScript: call with --profile PROFILE
    BackendScript->>Terraform: produce profile-scoped backend/key
    Workflow->>Terraform: plan/apply with TFVARS for PROFILE
    Workflow->>Helmsman: deploy DSF at Helmsman/dsf/PROFILE/...
    Helmsman->>K8s: install Helm releases and run hooks
    K8s-->>Helmsman: releases healthy
    Helmsman-->>Workflow: deployment complete
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/helmsman_external.yml (1)
270-281: ⚠️ Potential issue | 🔴 Critical: Preserve dry-run when dispatching the downstream MOSIP workflow.

Line 281 hard-codes `"mode":"apply"`, so a manual dry-run of helmsman_external.yml still launches helmsman_mosip.yml in apply mode. Pass the requested mode through for workflow_dispatch and only default to apply on push.

Suggested fix:

```diff
       - name: Trigger helmsman mosip workflow via API
         env:
           GITHUB_REPO: ${{ github.repository }}
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           BRANCH: ${{ github.ref_name }}
+          MODE: ${{ github.event_name == 'workflow_dispatch' && github.event.inputs.mode || 'apply' }}
         run: |
           curl -X POST \
             -H "Accept: application/vnd.github+json" \
             -H "Authorization: Bearer $GITHUB_TOKEN" \
             -H "X-GitHub-Api-Version: 2022-11-28" \
             https://api.github.com/repos/$GITHUB_REPO/actions/workflows/helmsman_mosip.yml/dispatches \
-            -d '{"ref":"'"$BRANCH"'","inputs":{"mode":"apply","profile":"'"${{ needs.set-matrix.outputs.profile }}"'"}}'
+            -d '{"ref":"'"$BRANCH"'","inputs":{"mode":"'"$MODE"'","profile":"'"${{ needs.set-matrix.outputs.profile }}"'"}}'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/helmsman_external.yml around lines 270 - 281, The dispatch payload currently hardcodes "mode":"apply" when calling helmsman_mosip.yml; instead export the incoming workflow input into an env var (e.g., MODE: ${{ github.event.inputs.mode }}) and use shell parameter expansion in the curl -d JSON to pass that mode (e.g., '"mode":"'"${MODE:-apply}"'"') so the requested mode from helmsman_external.yml is forwarded and only defaults to "apply" when MODE is empty (push runs).
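The intent of the fix can be sketched in isolation. `resolve_mode` and `build_payload` below are hypothetical helper names (the real workflow inlines this logic in the curl step); they show the default-to-apply fallback and the payload shape:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical helper: forward the requested mode, defaulting to "apply"
# only when nothing was requested (push events).
resolve_mode() {
  local requested="${1:-}"
  printf '%s' "${requested:-apply}"
}

# Hypothetical helper: build the dispatch payload the curl step sends.
build_payload() {
  local branch="$1" mode="$2" profile="$3"
  printf '{"ref":"%s","inputs":{"mode":"%s","profile":"%s"}}' \
    "$branch" "$mode" "$profile"
}

echo "$(resolve_mode '')"        # push run → apply
echo "$(resolve_mode dry-run)"   # manual dry-run → dry-run
```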
🧹 Nitpick comments (11)
Helmsman/hooks/esignet-1.7.1/redis-setup.sh (2)
36-38: Hardcoded Redis hostname assumes Bitnami Helm chart naming.

The hostname redis-master-0.redis-headless.redis.svc.cluster.local follows Bitnami Redis chart conventions. If a different Redis chart or custom naming is used, this will break silently. Consider parameterizing via environment variable for flexibility:

💡 Optional: Parameterize Redis host

```diff
 ESIGNET_NS="${ESIGNET_NS:-esignet}"
 REDIS_NS="redis"
+REDIS_HOST="${REDIS_HOST:-redis-master-0.redis-headless.redis.svc.cluster.local}"
+REDIS_PORT="${REDIS_PORT:-6379}"
 ...
 data:
-  redis-host: "redis-master-0.redis-headless.redis.svc.cluster.local"
-  redis-port: "6379"
+  redis-host: "${REDIS_HOST}"
+  redis-port: "${REDIS_PORT}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Helmsman/hooks/esignet-1.7.1/redis-setup.sh` around lines 36 - 38, The values file hardcodes redis-host and redis-port which assume Bitnami naming; change these to use environment/templated variables (e.g. read REDIS_HOST and REDIS_PORT or Helm value placeholders) with sensible defaults so non-Bitnami charts work. Replace the literal entries for "redis-host" and "redis-port" with environment/templated substitutions and update the consuming script (redis-setup.sh) to fall back to defaults if REDIS_HOST/REDIS_PORT are unset, ensuring the deployment uses the env/values-provided host and port rather than the hardcoded "redis-master-0.redis-headless.redis.svc.cluster.local".
21-23: Redis wait has no fallback, unlike other scripts.

Other hooks in this PR use `|| echo "WARNING: ..."` to continue on wait timeout. Here, a timeout will fail the entire post-install. This may be intentional since Redis is a hard dependency, but worth confirming the behavior is desired.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Helmsman/hooks/esignet-1.7.1/redis-setup.sh` around lines 21 - 23, The kubectl wait call in the redis-setup.sh script currently will abort the post-install on timeout; to match other hooks add a non-fatal fallback by appending a failure handler (e.g., "|| echo 'WARNING: Redis pods not ready after timeout, continuing'") so the script continues on timeout, or explicitly document/keep the failing behavior if Redis must be hard-failed; update the kubectl invocation that references REDIS_NS and the pod label app.kubernetes.io/name=redis accordingly to use the chosen fallback behavior consistent with other hook scripts.

Helmsman/hooks/esignet-1.7.1/keycloak-postinstall.sh (1)
121-129: Potential command injection in jq if secret value contains special characters.

The jq command on line 126 directly interpolates $val (base64-encoded) and $key into the jq filter string. While $val is base64-encoded (safe), and $key comes from a controlled associative array, consider using jq's --arg for safer variable interpolation.

Proposed safer jq usage

```diff
 if [[ -n "$val" ]]; then
-  kubectl -n "$KEYCLOAK_NS" get secret keycloak-client-secrets -o json | \
-    jq ".data[\"$key\"]=\"$val\"" | \
+  kubectl -n "$KEYCLOAK_NS" get secret keycloak-client-secrets -o json | \
+    jq --arg k "$key" --arg v "$val" '.data[$k]=$v' | \
     kubectl apply -f -
 fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Helmsman/hooks/esignet-1.7.1/keycloak-postinstall.sh` around lines 121 - 129, The jq invocation interpolates shell variables directly and risks command injection; update the loop that iterates SECRET_KEYS (variables key and val) to pass key and val into jq via --arg (e.g., --arg k "$key" --arg v "$val") and use a programmatic filter like '.data[$k]=$v' instead of embedding "$key" and "$val" into the filter string, then pipe that output to kubectl apply; ensure you change the jq call in the block that reads val and updates keycloak-client-secrets to use --arg for both variables.

Helmsman/hooks/esignet-1.7.1/esignet-preinstall.sh (1)
39-46: Consider the edge case where the namespace string appears in ConfigMap data.

The sed replacement `s/namespace: $SOFTHSM_NS/namespace: $ESIGNET_NS/g` performs a literal string replacement. While this works correctly for standard Kubernetes YAML metadata, if the ConfigMap's data section happens to contain the exact string namespace: softhsm, it would also be replaced.

This is a low-risk edge case for these specific ConfigMaps, but worth noting for future maintenance.
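The anchored alternative suggested for this kind of rewrite can be illustrated in isolation. The YAML snippet and variable values below are illustrative, not taken from the actual ConfigMap:

```shell
#!/usr/bin/env bash
# Illustrative variables and YAML; the real script reads the live ConfigMap.
SOFTHSM_NS="softhsm"
ESIGNET_NS="esignet"

yaml='metadata:
  namespace: softhsm
data:
  note: "namespace: softhsm also appears in data"'

# Anchoring the pattern to the start of the line limits the rewrite to the
# metadata key and leaves the occurrence inside .data untouched.
rewritten=$(printf '%s\n' "$yaml" \
  | sed "s/^\(  namespace:\) $SOFTHSM_NS$/\1 $ESIGNET_NS/")
printf '%s\n' "$rewritten"
```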
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Helmsman/hooks/esignet-1.7.1/esignet-preinstall.sh` around lines 39 - 46, The sed replacement can accidentally change occurrences of "namespace: $SOFTHSM_NS" inside ConfigMap data; update the pipeline that reads and rewrites the esignet-softhsm-share ConfigMap (the kubectl -o yaml | sed ... | kubectl create -f - sequence manipulating variables SOFTHSM_NS and ESIGNET_NS) to only replace the metadata namespace line (e.g., constrain the substitution to lines starting with "namespace:" or only within the metadata block) so that "namespace: ..." inside data fields is untouched; implement this by changing the sed expression to match the start-of-line namespace key or by using a YAML-aware tool (yq) to set metadata.namespace before creating the new ConfigMap.

terraform/modules/aws/aws-main.tf (1)
541-556: Consider removing postgresql-setup from depends_on if PostgreSQL is optional.

The activemq-setup module has module.postgresql-setup in its depends_on, but postgresql-setup is conditionally instantiated with count. When PostgreSQL setup is disabled (count = 0), this dependency reference still works but may be misleading about actual ordering requirements.

If ActiveMQ setup genuinely requires PostgreSQL to be configured first, this is fine. Otherwise, consider removing it from the dependency list for clarity.

Suggested change if PostgreSQL is not a hard dependency

```diff
 module "activemq-setup" {
   count      = var.enable_activemq_setup && var.nginx_node_ebs_volume_size_3 > 0 ? 1 : 0
-  depends_on = [module.aws-resource-creation, module.nginx-setup, module.rke2-setup, module.nfs-setup, module.postgresql-setup]
+  depends_on = [module.aws-resource-creation, module.nginx-setup, module.rke2-setup, module.nfs-setup]
   source     = "./activemq-setup"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@terraform/modules/aws/aws-main.tf` around lines 541 - 556, The depends_on for module "activemq-setup" includes module.postgresql-setup even though that module is conditionally created; if PostgreSQL is not a hard dependency remove module.postgresql-setup from the depends_on list (or make the dependency conditional) so the dependency list reflects actual ordering requirements; locate the depends_on attribute on module "activemq-setup" and either delete the module.postgresql-setup entry or replace it with a conditional/ternary inclusion tied to the postgresql count/enable variable so the dependency is only present when the postgresql-setup module is instantiated.

terraform/modules/aws/aws-resource-creation/variables.tf (1)
182-193: Consider enabling EBS encryption for the ActiveMQ data volume.

The new EBS volume for ActiveMQ has encrypted = false, matching the existing volumes. For production deployments storing message queue data, consider enabling encryption at rest:

```diff
   device_name           = "/dev/sdd"
   volume_size           = var.NGINX_NODE_EBS_VOLUME_SIZE_3
   volume_type           = "gp3"
   delete_on_termination = true
-  encrypted             = false
+  encrypted             = true
```

This would require consistent updates across all three EBS volume definitions if encryption is desired. If unencrypted volumes are intentional for this deployment model (e.g., performance reasons or encryption handled elsewhere), this can be deferred.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@terraform/modules/aws/aws-resource-creation/variables.tf` around lines 182 - 193, The ActiveMQ EBS block currently sets encrypted = false (in the conditional block gated by var.enable_activemq_setup and var.NGINX_NODE_EBS_VOLUME_SIZE_3) which leaves the volume unencrypted; change the encrypted flag to true (and if your environment requires a specific KMS key, add kms_key_id) in this block and make the same change for the other two NGINX/ActiveMQ EBS volume definitions so all three volumes use encrypted = true for consistent encryption-at-rest across ActiveMQ volumes (refer to the encrypted attribute, var.NGINX_NODE_EBS_VOLUME_SIZE_3, var.enable_activemq_setup and local.TAG_NAME.NGINX_TAG_NAME to locate the blocks).

Helmsman/hooks/esignet-1.7.1/esignet-init-db.sh (1)
28-43: Consider using a more robust namespace replacement approach.

The sed replacement pattern works for the current simple namespace names, but it could match unintended occurrences if namespace: postgres appears elsewhere in the YAML (e.g., in data values). For a more robust solution, consider using yq to target only the metadata namespace field:

```shell
kubectl -n "$POSTGRES_NS" get secret postgres-postgresql -o yaml | \
  yq '.metadata.namespace = "'"$ESIGNET_NS"'"' | \
  kubectl apply -f -
```

However, given the controlled nature of these Kubernetes resource YAMLs (secrets/configmaps have predictable structure), the current approach is acceptable for this use case.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Helmsman/hooks/esignet-1.7.1/esignet-init-db.sh` around lines 28 - 43, The sed-based namespace replacement in the esignet-init-db.sh steps that pipe kubectl get secret/configmap (...) | sed "s/namespace: $POSTGRES_NS/namespace: $ESIGNET_NS/g" | kubectl apply -f - can accidentally replace occurrences outside metadata.namespace; update those three commands (the ones handling secret postgres-postgresql, secret db-common-secrets, and configmap postgres-config) to use yq to set .metadata.namespace = "$ESIGNET_NS" after fetching the resource with kubectl -n "$POSTGRES_NS" get ... -o yaml, then pipe to kubectl apply -f - so only the metadata.namespace field is changed while preserving all other YAML content.

terraform/implementations/aws/infra/variables.tf (1)
243-253: Consider adding validation blocks for consistency.

The activemq_storage_device and activemq_mount_point variables here lack the validation blocks that exist in terraform/modules/aws/variables.tf. While the module-level validation will still catch invalid values, adding validation at the implementation level provides earlier feedback and consistency across the codebase.

Proposed addition

```diff
 variable "activemq_storage_device" {
   description = "Block device path of the 3rd EBS volume for ActiveMQ"
   type        = string
   default     = "/dev/nvme3n1"
+
+  validation {
+    condition     = can(regex("^/dev/.+$", var.activemq_storage_device))
+    error_message = "activemq_storage_device must be a valid device path starting with /dev/."
+  }
 }

 variable "activemq_mount_point" {
   description = "Mount point for ActiveMQ persistent storage"
   type        = string
   default     = "/srv/activemq"
+
+  validation {
+    condition     = can(regex("^/([A-Za-z0-9._-]+)(/.*)?$", var.activemq_mount_point))
+    error_message = "activemq_mount_point must be a valid absolute directory path."
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@terraform/implementations/aws/infra/variables.tf` around lines 243 - 253, Add validation blocks to the implementation-level Terraform variables activemq_storage_device and activemq_mount_point to mirror module-level checks: for activemq_storage_device (variable name activemq_storage_device) add a validation that the string is non-empty and matches a device path pattern (e.g., starts with "/dev/"), and for activemq_mount_point (variable name activemq_mount_point) add a validation that it is non-empty and is an absolute path (e.g., starts with "/"). Use the Terraform "validation" block with an appropriate condition and an explanatory error message for each variable to provide earlier feedback and keep consistency with terraform/modules/aws/variables.tf.

Helmsman/dsf/mosip-platform-java11/prereq-dsf.yaml (1)
36-37: Multiple hardcoded placeholder values need customization.

Several values in this DSF file contain placeholders or hardcoded domains that need to be updated for actual deployments:

- Lines 36-37: <version> placeholder in the Istio hook
- Lines 45-46, 52: <cluster-id>, <slack-channel-name>, <slack-api-url>, <env-name> placeholders
- Line 88: Hardcoded sandbox.xyz.net domain for kibanaHost

Consider parameterizing these values or documenting them clearly for users.

Also applies to: 52-52, 88-88
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Helmsman/dsf/mosip-platform-java11/prereq-dsf.yaml` around lines 36 - 37, The file contains hardcoded placeholders and domains in the DSF Helm/manifest entries; update postInstall and postUpgrade hooks (postInstall, postUpgrade) to accept a parameter for <version> instead of the literal placeholder, replace occurrences of <cluster-id>, <slack-channel-name>, <slack-api-url>, and <env-name> with templated/values-driven variables (or document them clearly) so they can be set via deployment values, and make kibanaHost configurable instead of the hardcoded sandbox.xyz.net; search for the symbols postInstall, postUpgrade, kibanaHost and the placeholder tokens (<version>, <cluster-id>, <slack-channel-name>, <slack-api-url>, <env-name>, sandbox.xyz.net) and refactor them to use values/template interpolation or add comments/defaults in the chart values.

terraform/modules/aws/activemq-setup/activemq-setup.yml (1)
99-105: Permissive NFS export settings: verify security requirements.

The NFS export configuration uses permissive options:

- * allows any host to mount the share
- no_root_squash gives remote root users full root access
- insecure allows connections from unprivileged ports

While these settings may be acceptable for an internal Kubernetes cluster network, consider restricting access if tighter security is required:

More restrictive alternative

```diff
 - name: Add ActiveMQ NFS export entry
   lineinfile:
     path: /etc/exports
-    line: "{{ activemq_mount_point }} *(rw,sync,no_root_squash,no_all_squash,insecure,subtree_check)"
+    # Consider restricting to cluster CIDR instead of *
+    line: "{{ activemq_mount_point }} {{ cluster_cidr | default('*') }}(rw,sync,no_root_squash,no_all_squash,insecure,subtree_check)"
     state: present
     create: yes
     backup: yes
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@terraform/modules/aws/activemq-setup/activemq-setup.yml` around lines 99 - 105, The NFS export added by the Ansible task "Add ActiveMQ NFS export entry" uses overly permissive options; update the lineinfile entry that writes "{{ activemq_mount_point }} *(rw,sync,no_root_squash,no_all_squash,insecure,subtree_check)" to use a configurable, restricted client list and safer mount options: replace the "*" with a variable like "activemq_nfs_allowed_hosts" (or a specific CIDR), remove "no_root_squash" and "insecure" (or make them variables defaulting to false), and document these new variables so the export policy can be tightened per environment; ensure the change is applied in the same task that references activemq_mount_point so it remains idempotent.

Helmsman/dsf/mosip-platform-java21/mosip-dsf.yaml (1)
215-232: Hardcoded domain values throughout the file.

This DSF file contains numerous hardcoded sandbox.xyz.net domain references (e.g., lines 215, 231-232, 292, and many others). While this appears to be a template pattern, consider extracting the domain into a variable or configmap reference for easier environment customization.

Example occurrences:

- istio.hosts[0]: "smtp.sandbox.xyz.net" (line 215)
- db.host: "postgres.sandbox.xyz.net" (line 231)
- istio.corsPolicy.allowOrigins[0].exact: "https://admin.sandbox.xyz.net" (line 292)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Helmsman/dsf/mosip-platform-java21/mosip-dsf.yaml` around lines 215 - 232, Replace hardcoded sandbox domain occurrences (e.g., istio.hosts[0], db.host, istio.corsPolicy.allowOrigins[0].exact under the masterdata-loader and other charts) with a single templated value rather than literal "sandbox.xyz.net": add a top-level Helm value like domain (or create a ConfigMap key) and change occurrences such as masterdata-loader.set.db.host, istio.hosts[0], and istio.corsPolicy.allowOrigins[0].exact to reference that value (e.g., {{ .Values.domain }} or a values.yaml entry) so environments can override it via values.yaml or --set; update values.yaml to include the new domain variable and adjust any chart templates/sets to consume it.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/helmsman_esignet.yml:
- Around line 106-123: Quote the GitHub input expansions and the BEFORE/SHA
values when building CHANGED_FILES to avoid word-splitting (e.g., use "${{
github.event.before }}" and "${{ github.sha }}" in the git diff invocation) and
ensure the command-substitution result is safely handled; also add a small guard
that if github.event.before is the all-zero commit id
(0000000000000000000000000000000000000000) or empty, substitute a fallback
(e.g., HEAD~1) before running git diff so CHANGED_FILES and later PROFILE
extraction are robust; update references in the CHANGED_FILES assignment and any
places that use PROFILE/CHANGED_FILES to use quoted expansions.
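The zero-commit guard described in this prompt can be sketched as a standalone helper; resolve_base and ZERO_SHA are illustrative names, not code from the workflow:

```shell
#!/usr/bin/env bash
set -euo pipefail

# GitHub sends this as github.event.before on a branch's first push.
ZERO_SHA="0000000000000000000000000000000000000000"

# Hypothetical helper: pick a safe base commit for git diff.
resolve_base() {
  local before="${1:-}"
  if [ -z "$before" ] || [ "$before" = "$ZERO_SHA" ]; then
    printf '%s' "HEAD~1"   # fallback when no usable before-SHA exists
  else
    printf '%s' "$before"
  fi
}

# Usage in the workflow would then be something like:
#   CHANGED_FILES=$(git diff --name-only "$(resolve_base "$BEFORE")" "$SHA")
```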
In @.github/workflows/helmsman_external.yml:
- Around line 69-70: The current logic collapses multi-profile pushes by setting
PROFILE=$(echo "$changed_profiles" | head -1) and exporting a single profile;
instead preserve all changed profiles (e.g., keep the full newline- or
comma-separated changed_profiles string or convert it to a JSON list) and export
that as the job output (outputs.profile) so downstream steps and workflow-caller
can make decisions from the complete set; update any other identical occurrence
(the other instance noted around the second reference) to use the full
changed_profiles value rather than head -1.
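One way to carry the full set instead of head -1, sketched with a hypothetical to_json_list helper that needs only bash builtins (no jq); the input value is illustrative:

```shell
#!/usr/bin/env bash
# Illustrative input: two profiles changed in one push.
changed_profiles='mosip-platform-java11
mosip-platform-java21'

# Hypothetical helper: turn a newline-separated list into a JSON array
# so the job output (outputs.profile) keeps every profile.
to_json_list() {
  local out="" p
  while IFS= read -r p; do
    [ -n "$p" ] && out="$out\"$p\","
  done
  printf '[%s]' "${out%,}"   # trim trailing comma, wrap in brackets
}

profiles_json=$(printf '%s\n' "$changed_profiles" | to_json_list)
echo "$profiles_json"   # → ["mosip-platform-java11","mosip-platform-java21"]
```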
In @.github/workflows/helmsman_mosip.yml:
- Around line 74-91: The script fails shellcheck by leaving variables unquoted
and can error under set -u because PROFILE is only written to GITHUB_ENV in one
branch; update the Set Profile block so the shell variable PROFILE is assigned
in both branches before exporting to GITHUB_ENV (ensure CHANGED_FILES is quoted
when used), quote "$GITHUB_ENV" when appending, quote all variable expansions
like "$PROFILE" and "$CHANGED_FILES" to prevent word splitting, and make the
grep more robust by escaping the dot in the pattern (use mosip-dsf\.yaml);
finally use echo "Using profile: $PROFILE" (with PROFILE guaranteed set) to
avoid unset-variable failures.
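A minimal sketch of the both-branches assignment under set -u; the file list, default profile, and sed pattern are assumptions for illustration, and GITHUB_ENV is stubbed with a temp file:

```shell
#!/usr/bin/env bash
set -u   # the failure mode being guarded against

# Illustrative stand-ins for the workflow environment.
GITHUB_ENV=$(mktemp)
CHANGED_FILES="Helmsman/dsf/mosip-platform-java21/mosip-dsf.yaml"

# Assign PROFILE in *both* branches so the final echo can never hit an
# unset variable; note the escaped dot in the grep pattern.
if printf '%s\n' "$CHANGED_FILES" | grep -q 'mosip-dsf\.yaml'; then
  PROFILE=$(printf '%s\n' "$CHANGED_FILES" \
    | sed -n 's|^Helmsman/dsf/\([^/]*\)/mosip-dsf\.yaml$|\1|p' | head -1)
else
  PROFILE="mosip-platform-java11"   # assumed default
fi

echo "PROFILE=$PROFILE" >> "$GITHUB_ENV"
echo "Using profile: $PROFILE"
```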
In @.github/workflows/helmsman_testrigs.yml:
- Around line 74-91: Quote all shell variable expansions to avoid word-splitting
and globbing (e.g., use "$CHANGED_FILES", "$PROFILE" and "$GITHUB_ENV" where
used), and ensure PROFILE is assigned in the current shell before printing:
capture CHANGED_FILES from the git diff as before, derive PROFILE into a local
shell variable (and export it or set it and then append "PROFILE=$PROFILE" to
GITHUB_ENV), use the quoted variable in the [[ -z "$PROFILE" ]] check and in the
final echo ("Using profile: $PROFILE") so the echo prints the profile
immediately; references: CHANGED_FILES, PROFILE, GITHUB_ENV, the git diff
assignment and the sed/grep extraction lines.
In `@Helmsman/dsf/esignet/esignet-dsf.yaml`:
- Around line 66-67: The OIDC UI config is pointing to cluster-local HTTP
endpoints; update oidc_ui.configmaps.oidc-ui.REACT_APP_API_BASE_URL and
oidc_ui.configmaps.oidc-ui.REACT_APP_SBI_DOMAIN_URI to use the public host
(e.g., https://esignet.sandbox.xyz.net/v1/esignet or
https://esignet.sandbox.xyz.net) or a relative path so the browser uses the same
origin as the UI and avoids mixed-content and DNS resolution issues.
In `@Helmsman/dsf/esignet/external-dsf.yaml`:
- Around line 53-58: This profile currently enables two PostgreSQL provisioning
paths: the Helm chart represented by the postgres block (postgres.enabled) and
the infra flag enable_postgresql_setup, which can cause postgres-init-esignet to
target the wrong backend; pick one approach and change the other to
disabled—either set postgres.enabled to false in the eSignet standalone profile
so the infra-managed DB (enable_postgresql_setup) is used, or set
enable_postgresql_setup to false in the profile's terraform vars so the charted
postgres is authoritative; if you choose the external/infra-managed route,
ensure secrets/configmap are provided (via generate-secret-cm.py or manual
creation) so postgres-init-esignet can find postgres.sandbox.xyz.net:5433.
In `@Helmsman/dsf/mosip-platform-java11/prereq-dsf.yaml`:
- Line 106: The valuesFile entry currently references a misspelled filename
"rancher-loggig-values.yaml"; update the valuesFile value to
"rancher-logging-values.yaml" so the valuesFile key points to the correct Helm
values file (edit the valuesFile line in the prereq-dsf.yaml where valuesFile is
set).
In `@Helmsman/dsf/mosip-platform-java21/esignet-dsf.yaml`:
- Around line 221-238: The oidc-ui chart's ConfigMap reference extraEnvVarsCM[2]
is inconsistent (uses "artifactory-share") compared to other apps; change the
value for oidc-ui's extraEnvVarsCM[2] from "artifactory-share" to
"artifactory-1202-share" so it uses the same artifactory ConfigMap as esignet
and mock-identity-system (update the oidc-ui block in esignet-dsf.yaml).
In `@Helmsman/dsf/mosip-platform-java21/prereq-dsf.yaml`:
- Around line 88-89: The kibanaHost value is hardcoded
("kibana.sandbox.xyz.net"); change it to a parameterized value so environments
can set the domain at deploy time. Replace the literal kibanaHost with a
templated/variable reference (e.g., a Helm/templating value or an
environment-specific var) and update any callers or values files to provide the
domain; ensure the key name kibanaHost remains the same so installName
"elasticsearch-kibana" continues to reference it. Also add a sensible default in
the values file or documentation for the required domain variable.
- Around line 45-46: Replace the literal "<cluster-id>" placeholders for the two
keys grafana.global.cattle.clusterId and global.cattle.clusterId with the real
cluster identifier (or an environment-variable reference supported by your
deploy system, e.g., $CLUSTER_ID or ${CLUSTER_ID}); update both keys to use the
same concrete value or env var so the deployment receives the correct cluster ID
and document the substitution method if automation isn't available.
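If env-var substitution is chosen, the mechanics look like this; CLUSTER_ID and the sample line are illustrative, not values from the actual file:

```shell
#!/usr/bin/env bash
# Illustrative cluster id; the real value comes from the deploy environment.
CLUSTER_ID="c-m-abc12345"
line='grafana.global.cattle.clusterId: "<cluster-id>"'

# Replace the literal placeholder token with the environment value.
rendered=$(printf '%s\n' "$line" | sed "s/<cluster-id>/$CLUSTER_ID/g")
echo "$rendered"   # → grafana.global.cattle.clusterId: "c-m-abc12345"
```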
- Around line 36-37: The Helm hooks postInstall and postUpgrade currently
include an unresolved placeholder "<version>" in the command invoking
install-istio-and-httpbin.sh; replace the placeholder with the actual
chart/istio/httpbin version or change the line to use a documented environment
variable or Helm template value (e.g. use a values.yaml key or {{ .Values.foo }}
expansion) so the hook becomes deterministic at deploy time, and add a short
note in the chart README or values.yaml documenting the required substitution
for install-istio-and-httpbin.sh.
In `@Helmsman/dsf/mosip-platform-java21/testrigs-dsf.yaml`:
- Line 62: The Slack webhook URL is hardcoded under
apitestrig.configmaps.apitestrig.slack-webhook-url—remove the literal and
instead read it from a Kubernetes Secret (e.g., create a Secret containing the
webhook key and use a secretKeyRef or envFrom in the apitestrig
deployment/manifest). Update the testrigs-dsf.yaml entry to reference that
Secret (or an environment variable sourced from the Secret) rather than
embedding the URL, and ensure any templates or Helm values use the Secret name
and key instead of the plain string.
In `@Helmsman/hooks/esignet-1.7.1/captcha-postinstall.sh`:
- Around line 45-51: The JSON patch assumes the container already has an env
array; make the script defensively ensure env exists before trying to append the
secret env var: check the same deployment (using CAPTCHA_NS and ENV_VAR_EXISTS
logic) for the presence of /spec/template/spec/containers/0/env and if missing
first run a kubectl patch that adds an empty env array (e.g. op add path
/spec/template/spec/containers/0/env value []), then run the existing kubectl
patch that appends MOSIP_CAPTCHA_GOOGLERECAPTCHAV2_SECRET_ESIGNET; update the
snippet around the kubectl patch invocation so the script creates the env array
when absent and only then runs the add-"-" patch.
In `@Helmsman/hooks/esignet-1.7.1/keycloak-postinstall.sh`:
- Around line 108-113: The helm install command uses the array expansion
"${HELM_SET_SECRETS[@]}", which can trigger unbound-variable errors under set -u
when the HELM_SET_SECRETS array is empty; update the invocation in the helm
install block (the command that installs esignet-keycloak-init) to safely handle
an empty array by either guarding with an explicit length check (if [
"${#HELM_SET_SECRETS[@]}" -gt 0 ]; then ... include expansion ...; fi) or using
the safe expansion form ${HELM_SET_SECRETS[@]+"${HELM_SET_SECRETS[@]}"} so the
HELM_SET_SECRETS expansion is omitted when empty, avoiding failures during
install.
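Both options can be exercised outside the hook; a minimal sketch in which the helm invocation is mocked as `helm_args` (it just reports how many extra arguments it received):

```shell
# Under `set -u`, expanding an empty array with "${HELM_SET_SECRETS[@]}" is an
# unbound-variable error on bash < 4.4.
HELM_SET_SECRETS=()

helm_args() {
  # Stand-in for the real `helm install esignet-keycloak-init ...` call.
  printf 'argc=%s\n' "$#"
}

# Option A: explicit length guard (${#arr[@]} is safe even when empty).
if [ "${#HELM_SET_SECRETS[@]}" -gt 0 ]; then
  out_a=$(helm_args "${HELM_SET_SECRETS[@]}")
else
  out_a=$(helm_args)
fi

# Option B: ${arr[@]+"${arr[@]}"} expands to zero words when the array is
# empty, so it is safe even with `set -u` enabled (here, in a subshell).
out_b=$(set -u; helm_args ${HELM_SET_SECRETS[@]+"${HELM_SET_SECRETS[@]}"})
```

Option B keeps the call site to a single line, which is why it is often preferred in hook scripts.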
In `@Helmsman/hooks/esignet-1.7.1/softhsm-mock-identity-system-postinstall.sh`:
- Around line 21-23: The kubectl configmap creation uses echo "$MOCK_HSM_PIN"
piped to base64 -d which can introduce a trailing newline; change the pipeline
that builds the literal (the line invoking kubectl -n esignet create configmap
softhsm-mock-identity-system-share --from-literal=softhsm-pin="$(echo
"$MOCK_HSM_PIN" | base64 -d)" ...) to use echo -n or printf '%s' (e.g., replace
echo "$MOCK_HSM_PIN" with echo -n "$MOCK_HSM_PIN" or printf '%s'
"$MOCK_HSM_PIN") so the decoded PIN has no trailing newline when stored from
MOCK_HSM_PIN.
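The byte-level difference is easy to demonstrate in isolation; a minimal sketch (the pin value is illustrative):

```shell
pin="1234"

# `echo` appends a trailing newline; `printf '%s'` emits the bytes verbatim.
echo_bytes=$(echo "$pin" | wc -c | tr -d ' ')          # 5 bytes: "1234\n"
printf_bytes=$(printf '%s' "$pin" | wc -c | tr -d ' ') # 4 bytes: "1234"
```

Using `printf '%s'` (or `echo -n` where supported) removes any ambiguity about stray newlines before the value reaches `base64 -d` or a `--from-literal` argument.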
In `@terraform/implementations/aws/infra/profiles/esignet/aws.tfvars`:
- Around line 79-82: The profile is missing a DNS record for the MinIO host: add
"minio" to the appropriate subdomain list (update the subdomain_public array
that currently contains subdomain_public = ["esignet", "signup"]) so Terraform
creates a DNS entry for minio.sandbox.xyz.net; alternatively, if you prefer not
to expose MinIO here, stop publishing that host from
Helmsman/dsf/esignet/external-dsf.yaml instead of changing subdomain_public.
In `@terraform/modules/aws/activemq-setup/activemq-setup.sh`:
- Around line 112-123: After the ansible success branch (after checking
ANSIBLE_EXIT and before printing the final "Done" lines), verify that the
StorageClass manifest actually exists at /tmp/activemq-storageclass.yaml; if the
file is missing or empty, print an error (including context like ANSIBLE_LOG
tail or ANSIBLE_EXIT) and exit non‑zero so Terraform doesn't proceed. Use the
same logging style as the existing error block (referencing ANSIBLE_LOG and tail
-30) and keep the existing success messages that reference ACTIVEMQ_MOUNT_POINT
and NGINX_PRIVATE_IP only when the file check passes.
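A minimal sketch of such a guard, matching the logging style described above (the log path default is an assumption):

```shell
ANSIBLE_LOG=${ANSIBLE_LOG:-/tmp/activemq-ansible.log}   # assumed default path

check_manifest() {
  # -s: file exists AND has a size greater than zero
  if [ ! -s "$1" ]; then
    echo "ERROR: StorageClass manifest missing or empty at $1 (ANSIBLE_EXIT=${ANSIBLE_EXIT:-?})" >&2
    [ -f "$ANSIBLE_LOG" ] && tail -30 "$ANSIBLE_LOG" >&2
    return 1
  fi
}

# In the real script: check_manifest /tmp/activemq-storageclass.yaml || exit 1
```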
In `@terraform/modules/aws/activemq-setup/main.tf`:
- Around line 119-133: The remote-exec inline block sets KUBECONFIG and prints
it but uses single quotes which prevent expansion and uses an arbitrary
find+head selection; update the inline commands so the echo uses double quotes
(change "echo 'Using kubeconfig: $KUBECONFIG'" to use double quotes) and make
kubeconfig discovery deterministic: when building KUBECONFIG (the find
/home/${var.CONTROL_PLANE_USER}/.kube/ ... command) restrict the pattern or
depth (e.g., search for files named "config" or "config*.yaml"), or add logic to
check the number of matches and fail with a clear error if multiple candidates
are found (i.e., count matches and exit 1 if >1) so that the remote-exec block
consistently selects a single kubeconfig.
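A sketch of a deterministic selection helper as plain shell (directory argument and error wording are illustrative; in main.tf this would be inlined into the remote-exec commands):

```shell
pick_kubeconfig() {
  dir=$1
  # `sort` makes the ordering stable; count candidates before choosing one.
  files=$(find "$dir" -maxdepth 1 -name '*.yaml' -type f 2>/dev/null | sort)
  count=$(printf '%s' "$files" | grep -c . || true)
  if [ "$count" -eq 0 ]; then
    echo "ERROR: no kubeconfig found in $dir" >&2; return 1
  elif [ "$count" -gt 1 ]; then
    echo "ERROR: multiple kubeconfig candidates in $dir: $files" >&2; return 1
  fi
  printf '%s\n' "$files"
}
```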
---
Outside diff comments:
In @.github/workflows/helmsman_external.yml:
- Around line 270-281: The dispatch payload currently hardcodes "mode":"apply"
when calling helmsman_mosip.yml; instead export the incoming workflow input into
an env var (e.g., MODE: ${{ github.event.inputs.mode }}) and use shell parameter
expansion in the curl -d JSON to pass that mode (e.g.,
'"mode":"'"${MODE:-apply}"'"') so the requested mode from helmsman_external.yml
is forwarded and only defaults to "apply" when MODE is empty (push runs).
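A sketch of the suggested forwarding (BRANCH/PROFILE values are placeholders, and the actual curl dispatch call is elided):

```shell
MODE=${MODE:-}            # would come from ${{ github.event.inputs.mode }}
BRANCH=${BRANCH:-main}    # illustrative
PROFILE=${PROFILE:-mosip} # illustrative

# ${MODE:-apply} falls back to "apply" only when MODE is empty (push runs).
payload=$(printf '{"ref":"%s","inputs":{"mode":"%s","profile":"%s"}}' \
  "$BRANCH" "${MODE:-apply}" "$PROFILE")
# curl -X POST ... -d "$payload"   # real dispatch call elided
```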
---
Nitpick comments:
In `@Helmsman/dsf/mosip-platform-java11/prereq-dsf.yaml`:
- Around line 36-37: The file contains hardcoded placeholders and domains in the
DSF Helm/manifest entries—update postInstall and postUpgrade hooks (postInstall,
postUpgrade) to accept a parameter for <version> instead of the literal
placeholder, replace occurrences of <cluster-id>, <slack-channel-name>,
<slack-api-url>, and <env-name> with templated/values-driven variables (or
document them clearly) so they can be set via deployment values, and make
kibanaHost configurable instead of the hardcoded sandbox.xyz.net; search for the
symbols postInstall, postUpgrade, kibanaHost and the placeholder tokens
(<version>, <cluster-id>, <slack-channel-name>, <slack-api-url>, <env-name>,
sandbox.xyz.net) and refactor them to use values/template interpolation or add
comments/defaults in the chart values.
In `@Helmsman/dsf/mosip-platform-java21/mosip-dsf.yaml`:
- Around line 215-232: Replace hardcoded sandbox domain occurrences (e.g.,
istio.hosts[0], db.host, istio.corsPolicy.allowOrigins[0].exact under the
masterdata-loader and other charts) with a single templated value rather than
literal "sandbox.xyz.net": add a top-level Helm value like domain (or create a
ConfigMap key) and change occurrences such as masterdata-loader.set.db.host,
istio.hosts[0], and istio.corsPolicy.allowOrigins[0].exact to reference that
value (e.g., {{ .Values.domain }} or a values.yaml entry) so environments can
override it via values.yaml or --set; update values.yaml to include the new
domain variable and adjust any chart templates/sets to consume it.
In `@Helmsman/hooks/esignet-1.7.1/esignet-init-db.sh`:
- Around line 28-43: The sed-based namespace replacement in the
esignet-init-db.sh steps that pipe kubectl get secret/configmap (...) | sed
"s/namespace: $POSTGRES_NS/namespace: $ESIGNET_NS/g" | kubectl apply -f - can
accidentally replace occurrences outside metadata.namespace; update those three
commands (the ones handling secret postgres-postgresql, secret
db-common-secrets, and configmap postgres-config) to use yq to set
.metadata.namespace = "$ESIGNET_NS" after fetching the resource with kubectl -n
"$POSTGRES_NS" get ... -o yaml, then pipe to kubectl apply -f - so only the
metadata.namespace field is changed while preserving all other YAML content.
In `@Helmsman/hooks/esignet-1.7.1/esignet-preinstall.sh`:
- Around line 39-46: The sed replacement can accidentally change occurrences of
"namespace: $SOFTHSM_NS" inside ConfigMap data; update the pipeline that reads
and rewrites the esignet-softhsm-share ConfigMap (the kubectl -o yaml | sed ...
| kubectl create -f - sequence manipulating variables SOFTHSM_NS and ESIGNET_NS)
to only replace the metadata namespace line—e.g., constrain the substitution to
lines starting with "namespace:" or only within the metadata block—so that
"namespace: ..." inside data fields is untouched; implement this by changing the
sed expression to match the start-of-line namespace key or by using a YAML-aware
tool (yq) to set metadata.namespace before creating the new ConfigMap.
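For the sed route, anchoring the pattern to the indented metadata key keeps data fields untouched; a minimal sketch (the ConfigMap content is contrived, and this is still more brittle than a YAML-aware tool like yq):

```shell
SOFTHSM_NS=softhsm
ESIGNET_NS=esignet

manifest='apiVersion: v1
kind: ConfigMap
metadata:
  name: esignet-softhsm-share
  namespace: softhsm
data:
  note: "namespace: softhsm must survive here"'

# Anchoring on start-of-line plus the two-space indent matches only the
# metadata.namespace line, not the "namespace: ..." text inside data.
rewritten=$(printf '%s\n' "$manifest" | \
  sed "s/^  namespace: $SOFTHSM_NS\$/  namespace: $ESIGNET_NS/")
```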
In `@Helmsman/hooks/esignet-1.7.1/keycloak-postinstall.sh`:
- Around line 121-129: The jq invocation interpolates shell variables directly
and risks command injection; update the loop that iterates SECRET_KEYS
(variables key and val) to pass key and val into jq via --arg (e.g., --arg k
"$key" --arg v "$val") and use a programmatic filter like '.data[$k]=$v' instead
of embedding "$key" and "$val" into the filter string, then pipe that output to
kubectl apply; ensure you change the jq call in the block that reads val and
updates keycloak-client-secrets to use --arg for both variables.
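A minimal sketch of the `--arg` form (the secret JSON and key/value are contrived, and the `kubectl apply` step is elided):

```shell
key="client_secret"
val='s3cr3t"; malicious'   # a hostile value stays inert when passed via --arg

secret_json='{"apiVersion":"v1","kind":"Secret","data":{}}'
# --arg binds shell values as jq string variables; the filter never sees
# unescaped shell text, so quotes in $val cannot break out of the program.
updated=$(printf '%s' "$secret_json" | \
  jq --arg k "$key" --arg v "$val" '.data[$k] = $v')
```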
In `@Helmsman/hooks/esignet-1.7.1/redis-setup.sh`:
- Around line 36-38: The values file hardcodes redis-host and redis-port which
assume Bitnami naming; change these to use environment/templated variables (e.g.
read REDIS_HOST and REDIS_PORT or Helm value placeholders) with sensible
defaults so non-Bitnami charts work. Replace the literal entries for
"redis-host" and "redis-port" with environment/templated substitutions and
update the consuming script (redis-setup.sh) to fall back to defaults if
REDIS_HOST/REDIS_PORT are unset, ensuring the deployment uses the
env/values-provided host and port rather than the hardcoded
"redis-master-0.redis-headless.redis.svc.cluster.local".
- Around line 21-23: The kubectl wait call in the redis-setup.sh script
currently will abort the post-install on timeout; to match other hooks add a
non-fatal fallback by appending a failure handler (e.g., "|| echo 'WARNING:
Redis pods not ready after timeout, continuing'") so the script continues on
timeout, or explicitly document/keep the failing behavior if Redis must be
hard-failed; update the kubectl invocation that references REDIS_NS and the pod
label app.kubernetes.io/name=redis accordingly to use the chosen fallback
behavior consistent with other hook scripts.
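The non-fatal pattern in isolation, with `false` standing in for the real `kubectl -n "$REDIS_NS" wait ... --timeout=...` call:

```shell
# A failing wait logs a warning and lets the hook continue instead of aborting.
false || echo "WARNING: Redis pods not ready after timeout, continuing"
echo "post-install continues"
```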
In `@terraform/implementations/aws/infra/variables.tf`:
- Around line 243-253: Add validation blocks to the implementation-level
Terraform variables activemq_storage_device and activemq_mount_point to mirror
module-level checks: for activemq_storage_device (variable name
activemq_storage_device) add a validation that the string is non-empty and
matches a device path pattern (e.g., starts with "/dev/"), and for
activemq_mount_point (variable name activemq_mount_point) add a validation that
it is non-empty and is an absolute path (e.g., starts with "/"). Use the
Terraform "validation" block with an appropriate condition and an explanatory
error message for each variable to provide earlier feedback and keep consistency
with terraform/modules/aws/variables.tf.
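A sketch of the suggested validation blocks (conditions and messages are illustrative; `startswith` requires Terraform >= 1.3, otherwise use `can(regex(...))` for both):

```hcl
variable "activemq_storage_device" {
  type = string
  validation {
    condition     = can(regex("^/dev/", var.activemq_storage_device))
    error_message = "activemq_storage_device must be a device path starting with /dev/."
  }
}

variable "activemq_mount_point" {
  type = string
  validation {
    condition     = startswith(var.activemq_mount_point, "/")
    error_message = "activemq_mount_point must be a non-empty absolute path."
  }
}
```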
In `@terraform/modules/aws/activemq-setup/activemq-setup.yml`:
- Around line 99-105: The NFS export added by the Ansible task "Add ActiveMQ NFS
export entry" uses overly permissive options; update the lineinfile entry that
writes "{{ activemq_mount_point }}
*(rw,sync,no_root_squash,no_all_squash,insecure,subtree_check)" to use a
configurable, restricted client list and safer mount options: replace the "*"
with a variable like "activemq_nfs_allowed_hosts" (or a specific CIDR), remove
"no_root_squash" and "insecure" (or make them variables defaulting to false),
and document these new variables so the export policy can be tightened per
environment; ensure the change is applied in the same task that references
activemq_mount_point so it remains idempotent.
In `@terraform/modules/aws/aws-main.tf`:
- Around line 541-556: The depends_on for module "activemq-setup" includes
module.postgresql-setup even though that module is conditionally created; if
PostgreSQL is not a hard dependency remove module.postgresql-setup from the
depends_on list (or make the dependency conditional) so the dependency list
reflects actual ordering requirements; locate the depends_on attribute on module
"activemq-setup" and either delete the module.postgresql-setup entry or replace
it with a conditional/ternary inclusion tied to the postgresql count/enable
variable so the dependency is only present when the postgresql-setup module is
instantiated.
In `@terraform/modules/aws/aws-resource-creation/variables.tf`:
- Around line 182-193: The ActiveMQ EBS block currently sets encrypted = false
(in the conditional block gated by var.enable_activemq_setup and
var.NGINX_NODE_EBS_VOLUME_SIZE_3) which leaves the volume unencrypted; change
the encrypted flag to true (and if your environment requires a specific KMS key,
add kms_key_id) in this block and make the same change for the other two
NGINX/ActiveMQ EBS volume definitions so all three volumes use encrypted = true
for consistent encryption-at-rest across ActiveMQ volumes (refer to the
encrypted attribute, var.NGINX_NODE_EBS_VOLUME_SIZE_3, var.enable_activemq_setup
and local.TAG_NAME.NGINX_TAG_NAME to locate the blocks).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 8b80eefd-7d8a-42cb-b809-5303142359f2
📒 Files selected for processing (56)
- .github/scripts/configure-backend.sh
- .github/workflows/helmsman_esignet.yml
- .github/workflows/helmsman_external.yml
- .github/workflows/helmsman_mosip.yml
- .github/workflows/helmsman_testrigs.yml
- .github/workflows/terraform-destroy.yml
- .github/workflows/terraform.yml
- Helmsman/dsf/esignet/esignet-dsf.yaml
- Helmsman/dsf/esignet/external-dsf.yaml
- Helmsman/dsf/esignet/prereq-dsf.yaml
- Helmsman/dsf/mosip-platform-java11/esignet-dsf.yaml
- Helmsman/dsf/mosip-platform-java11/external-dsf.yaml
- Helmsman/dsf/mosip-platform-java11/mosip-dsf.yaml
- Helmsman/dsf/mosip-platform-java11/prereq-dsf.yaml
- Helmsman/dsf/mosip-platform-java11/testrigs-dsf.yaml
- Helmsman/dsf/mosip-platform-java21/esignet-dsf.yaml
- Helmsman/dsf/mosip-platform-java21/external-dsf.yaml
- Helmsman/dsf/mosip-platform-java21/mosip-dsf.yaml
- Helmsman/dsf/mosip-platform-java21/prereq-dsf.yaml
- Helmsman/dsf/mosip-platform-java21/testrigs-dsf.yaml
- Helmsman/hooks/esignet-1.7.1/captcha-postinstall.sh
- Helmsman/hooks/esignet-1.7.1/esignet-demo-oidc-partner-onboarder-postinstall.sh
- Helmsman/hooks/esignet-1.7.1/esignet-demo-oidc-partner-onboarder-preinstall.sh
- Helmsman/hooks/esignet-1.7.1/esignet-init-db.sh
- Helmsman/hooks/esignet-1.7.1/esignet-partner-onboarder-postinstall.sh
- Helmsman/hooks/esignet-1.7.1/esignet-partner-onboarder-preinstall.sh
- Helmsman/hooks/esignet-1.7.1/esignet-preinstall.sh
- Helmsman/hooks/esignet-1.7.1/kafka-postinstall.sh
- Helmsman/hooks/esignet-1.7.1/keycloak-postinstall.sh
- Helmsman/hooks/esignet-1.7.1/mock-identity-system-preinstall.sh
- Helmsman/hooks/esignet-1.7.1/mock-relying-party-service-preinstall.sh
- Helmsman/hooks/esignet-1.7.1/mock-relying-party-ui-preinstall.sh
- Helmsman/hooks/esignet-1.7.1/oidc-ui-preinstall.sh
- Helmsman/hooks/esignet-1.7.1/postgres-postinstall.sh
- Helmsman/hooks/esignet-1.7.1/postgres-preinstall.sh
- Helmsman/hooks/esignet-1.7.1/redis-setup.sh
- Helmsman/hooks/esignet-1.7.1/softhsm-esignet-postinstall.sh
- Helmsman/hooks/esignet-1.7.1/softhsm-esignet-setup.sh
- Helmsman/hooks/esignet-1.7.1/softhsm-mock-identity-system-postinstall.sh
- Helmsman/hooks/esignet-1.7.1/softhsm-mock-identity-system-preinstall.sh
- Helmsman/utils/rancher-logging-values.yaml
- docs/profile-based-deployment.drawio
- terraform/implementations/aws/infra/main.tf
- terraform/implementations/aws/infra/profiles/esignet/aws.tfvars
- terraform/implementations/aws/infra/profiles/mosip/aws.tfvars
- terraform/implementations/aws/infra/variables.tf
- terraform/infra/aws/main.tf
- terraform/infra/aws/variables.tf
- terraform/infra/main.tf
- terraform/infra/variables.tf
- terraform/modules/aws/activemq-setup/activemq-setup.sh
- terraform/modules/aws/activemq-setup/activemq-setup.yml
- terraform/modules/aws/activemq-setup/main.tf
- terraform/modules/aws/aws-main.tf
- terraform/modules/aws/aws-resource-creation/variables.tf
- terraform/modules/aws/variables.tf
Signed-off-by: bhumi46 <thisisbn46@gmail.com>
Actionable comments posted: 4
♻️ Duplicate comments (1)
terraform/modules/aws/activemq-setup/main.tf (1)
119-133: ⚠️ Potential issue | 🟡 Minor: Non-deterministic kubeconfig selection may cause unpredictable behavior.
Line 122 uses `find ... | head -1`, which arbitrarily picks one file if multiple .yaml files exist in the .kube/ directory. This could lead to using an unexpected kubeconfig in environments with multiple cluster configurations.
🔧 Proposed fix for deterministic kubeconfig selection
  # Build kubeconfig path from the actual login user using find to locate the .yaml file
- "export KUBECONFIG=$(find /home/${var.CONTROL_PLANE_USER}/.kube/ -name '*.yaml' | head -1)",
+ "KUBE_FILES=$(find /home/${var.CONTROL_PLANE_USER}/.kube/ -maxdepth 1 -name '*.yaml' -type f)",
+ "KUBE_COUNT=$(echo \"$KUBE_FILES\" | grep -c . || true)",
+ "if [ \"$KUBE_COUNT\" -gt 1 ]; then echo \"ERROR: Multiple kubeconfig files found in /home/${var.CONTROL_PLANE_USER}/.kube/: $KUBE_FILES\"; exit 1; fi",
+ "export KUBECONFIG=$(echo \"$KUBE_FILES\" | head -1)",
  # Fail immediately with a clear message if no valid file was found.
  "if [ -z \"$KUBECONFIG\" ] || [ ! -f \"$KUBECONFIG\" ]; then echo \"ERROR: kubeconfig not found in /home/${var.CONTROL_PLANE_USER}/.kube/\"; exit 1; fi",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@terraform/modules/aws/activemq-setup/main.tf` around lines 119 - 133, The remote-exec provisioner currently sets KUBECONFIG using "find ... | head -1", which is non-deterministic when multiple .yaml files exist under /home/${var.CONTROL_PLANE_USER}/.kube/; replace that selection with a deterministic strategy: explicitly prefer a well-known filename (e.g., "config" or ".kube/config") or apply a consistent selection rule (e.g., sort by name or modification time and pick the first/most-recent) when populating KUBECONFIG in the provisioner block, and keep the existing existence check/exit behavior; update the inline commands referencing KUBECONFIG and the comment above them to reflect the deterministic selection.
🧹 Nitpick comments (6)
terraform/modules/aws/activemq-setup/activemq-setup.sh (1)
73-81: Consider the security implications of galaxy_disable_gpg_verify = true. Disabling GPG verification for Ansible Galaxy downloads bypasses integrity checks on downloaded collections/roles. This is acceptable if the CI/CD environment is trusted and isolated, but worth noting for security awareness.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@terraform/modules/aws/activemq-setup/activemq-setup.sh` around lines 73 - 81, The ansible.cfg being written by the activemq setup script disables Galaxy GPG verification (galaxy_disable_gpg_verify = true), which skips integrity checks; update the script to avoid hardcoding this insecure setting by either removing the galaxy_disable_gpg_verify line, setting it to false, or making it configurable via an environment variable or flag in the activemq-setup.sh script so callers can opt-in only in trusted CI contexts; look for the block that writes "$WORK_DIR/ansible.cfg" and the galaxy_disable_gpg_verify entry to implement the change.
Helmsman/dsf/mosip-platform-java11/prereq-dsf.yaml (3)
36-37: Hardcoded domain in hook script arguments. The domain sandbox.xyz.net is hardcoded in the Istio hook invocation. This may work for a sandbox environment but will need to be parameterized for other environments. Consider using an environment variable like $ENV_DOMAIN or a Helmsman variable substitution pattern if this DSF file is intended for multiple environments.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Helmsman/dsf/mosip-platform-java11/prereq-dsf.yaml` around lines 36 - 37, Replace the hardcoded domain "sandbox.xyz.net" passed to the hook in both postInstall and postUpgrade so the DSF can be used across environments: update the invocations of install-istio-and-httpbin.sh (referenced in postInstall and postUpgrade) to use a parameterized variable (e.g., ${ENV_DOMAIN} or your Helmsman variable substitution) instead of the literal domain, and ensure the variable is documented or exported where the chart/Helmsman run will set it.
52-52: Placeholder values in alerting-setup hook require replacement. The hook script arguments contain literal placeholders (<slack-channel-name>, <slack-api-url>, <env-name>) that must be replaced with actual values or environment variables before enabling rancher-monitoring. Since monitoring is disabled by default, this is acceptable for now, but consider documenting the required replacements.
Verify each finding against the current code and only fix it if needed. In `@Helmsman/dsf/mosip-platform-java11/prereq-dsf.yaml` at line 52, The postInstall entry uses literal placeholders in the alerting setup call—replace "<slack-channel-name>", "<slack-api-url>", and "<env-name>" in the postInstall command that invokes "$WORKDIR/hooks/alerting-setup.sh" with real values or environment-variable references (e.g., ${SLACK_CHANNEL}, ${SLACK_API_URL}, ${ENV_NAME}) before enabling rancher-monitoring; update the postInstall string accordingly and add documentation or comments noting the required env vars so deployers know to set them.
88-89: Hardcoded Kibana host for istio-addons-logging. Similar to the hook scripts, kibanaHost uses a hardcoded sandbox.xyz.net domain. If this DSF is deployed across environments, this value should be parameterized.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Helmsman/dsf/mosip-platform-java11/prereq-dsf.yaml` around lines 88 - 89, The kibanaHost key in the istio-addons-logging DSF currently hardcodes "kibana.sandbox.xyz.net"; change it to a configurable parameter (e.g., use a templated/values variable instead of a literal) so deployments can supply the correct domain per environment; update the kibanaHost value to reference the configurable variable (e.g., a chart/DSF value like kibanaHost or an environment placeholder) and ensure any code or templates that consume kibanaHost (the istio-addons-logging configuration and any references tied to installName "elasticsearch-kibana") are updated to read the parameter and have a sensible default documented.
.github/workflows/helmsman_testrigs.yml (1)
37-37: Consider upgrading to actions/checkout@v4 for consistency. This workflow uses actions/checkout@v2 while helmsman_mosip.yml and helmsman_external.yml use v4. Using a consistent version across workflows improves maintainability and ensures you benefit from security updates and performance improvements.
Proposed fix
  - name: Checkout repository
-   uses: actions/checkout@v2
+   uses: actions/checkout@v4
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/helmsman_testrigs.yml at line 37, Update the workflow step that currently references actions/checkout@v2 to use actions/checkout@v4 for consistency with other workflows; locate the line containing "uses: actions/checkout@v2" in helmsman_testrigs.yml and change it to "uses: actions/checkout@v4", then run a quick workflow lint or dry run to ensure compatibility with any inputs or behavior differences.
.github/workflows/helmsman_external.yml (1)
276-282: Quote the MODE variable to satisfy shellcheck. Static analysis flagged SC2086 on the curl command. While MODE will typically be a simple string like "apply" or "dry-run", quoting it is safer practice.
Proposed fix
run: |
  curl -X POST \
    -H "Accept: application/vnd.github+json" \
    -H "Authorization: Bearer $GITHUB_TOKEN" \
    -H "X-GitHub-Api-Version: 2022-11-28" \
    https://api.github.com/repos/$GITHUB_REPO/actions/workflows/helmsman_mosip.yml/dispatches \
-   -d '{"ref":"'"$BRANCH"'","inputs":{"mode":"'"${MODE:-apply}"'","profile":"'"${{ needs.set-matrix.outputs.profile }}"'"}}'
+   -d '{"ref":"'"$BRANCH"'","inputs":{"mode":"'"${MODE:-apply}"'","profile":"'"${{ needs.set-matrix.outputs.profile }}"'"}}'
Actually, the MODE variable usage ${MODE:-apply} appears correctly quoted within the JSON string. The shellcheck warning may be a false positive since the variable is interpolated within single quotes that are concatenated with double-quoted segments. However, if you want to address it explicitly, you could rewrite using printf or jq for JSON construction.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/helmsman_external.yml around lines 276 - 282, ShellCheck SC2086 warns MODE may be unquoted in the curl -d JSON payload; fix by constructing the JSON payload safely (so inputs.mode is always a quoted string) and passing it to curl as a single quoted/quoted variable instead of inline concatenation. Specifically, change the curl invocation that uses -d '{"ref":"'"$BRANCH"'","inputs":{"mode":"'"${MODE:-apply}"'","profile":"'"${{ needs.set-matrix.outputs.profile }}"'"}}' to build the payload with a safe formatter (e.g., printf or jq) and then call curl -d "$PAYLOAD" so that MODE (and other inputs) are properly quoted; target the curl command and the inputs.mode usage.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/helmsman_external.yml:
- Around line 66-74: The current assignment of changed_profiles -> PROFILE can
produce a newline-separated list which breaks the subsequent startsWith check
and the workflow dispatch payload; update the logic to either (A) filter
changed_profiles to only MOSIP profiles (e.g., grep '^mosip' or match the MOSIP
naming convention) and then set PROFILE to that filtered single or multi-entry
list, and ensure the startsWith check tests against that filtered value, or (B)
split changed_profiles into individual profile names and iterate: for each
profile (from the split of changed_profiles) run the startsWith/condition check
and perform a separate workflow_dispatch for that single PROFILE value so each
dispatch gets a single-line PROFILE; reference the variables changed_profiles
and PROFILE and the places that perform the startsWith check and the workflow
dispatch to implement the chosen approach.
In `@Helmsman/hooks/esignet-1.7.1/keycloak-postinstall.sh`:
- Around line 102-103: The script masks failures of the helm repo add command by
appending "|| true" to "helm repo add mosip https://mosip.github.io/mosip-helm",
which can leave a stale repo if the URL differs; replace the "helm repo add
mosip ..." invocation so it uses the helm flag to force update the entry (use
--force-update) instead of swallowing errors, and remove the "|| true" so
failures surface; update the command call in the keycloak-postinstall.sh where
"helm repo add mosip" is invoked.
- Around line 31-34: The current pipeline that clones the keycloak-host
configmap across namespaces (kubectl -n "$KEYCLOAK_NS" get configmap
keycloak-host -o yaml | sed ... | kubectl apply -f -) fails because it preserves
server-managed metadata and Helm ownership annotations; update the transform to
strip metadata.uid, metadata.resourceVersion, metadata.creationTimestamp,
metadata.managedFields and any helm.sh/release or meta.helm.sh annotations
before applying to $ESIGNET_NS (e.g., add a filter step that removes those
metadata fields and helm annotations from the YAML produced by get), then apply
the cleaned YAML; repeat the same metadata/annotation-stripping change for the
equivalent pipelines at the locations handling other objects (the blocks
referenced around lines 47–49, 55–57, and 132–134) so all cross-namespace
applies use sanitized manifests.
- Around line 86-92: The current command substitution for val masks
kubectl/base64 failures with "|| echo """, causing silent secret loss; change
the logic so command failures are detected and cause the script to fail instead
of returning an empty string: run the kubectl | base64 -d pipeline without the
"|| echo \"\"" fallback, capture its exit status (for example by using an if !
val=$(kubectl -n "$KEYCLOAK_NS" get secret keycloak-client-secrets -o
jsonpath="{.data.$key}" 2>/dev/null | base64 -d 2>/dev/null); then handle the
error) and on failure emit an error and exit non‑zero; only when the pipeline
succeeds use HELM_SET_SECRETS with the escaped value (referenced symbols: val,
KEYCLOAK_NS, HELM_SET_SECRETS, escape_helm_value).
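A sketch of the failure-detecting form; `fetch_secret` mocks the kubectl | base64 -d pipeline, and the real hook would exit non-zero instead of setting a sentinel value:

```shell
# Note: for a real `kubectl ... | base64 -d` pipeline, also `set -o pipefail`,
# otherwise the exit status is only that of the final base64 stage.
fetch_secret() { return 1; }   # mock: stands in for the failing lookup

if ! val=$(fetch_secret); then
  echo "ERROR: could not read secret key from keycloak-client-secrets" >&2
  val="__FAILED__"   # the real hook would `exit 1` here instead
fi
```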
---
Duplicate comments:
In `@terraform/modules/aws/activemq-setup/main.tf`:
- Around line 119-133: The remote-exec provisioner currently sets KUBECONFIG
using "find ... | head -1", which is non-deterministic when multiple .yaml files
exist under /home/${var.CONTROL_PLANE_USER}/.kube/; replace that selection with
a deterministic strategy: explicitly prefer a well-known filename (e.g.,
"config" or ".kube/config") or apply a consistent selection rule (e.g., sort by
name or modification time and pick the first/most-recent) when populating
KUBECONFIG in the provisioner block, and keep the existing existence check/exit
behavior; update the inline commands referencing KUBECONFIG and the comment
above them to reflect the deterministic selection.
---
Nitpick comments:
In @.github/workflows/helmsman_external.yml:
- Around line 276-282: ShellCheck SC2086 warns MODE may be unquoted in the curl
-d JSON payload; fix by constructing the JSON payload safely (so inputs.mode is
always a quoted string) and passing it to curl as a single quoted/quoted
variable instead of inline concatenation. Specifically, change the curl
invocation that uses -d
'{"ref":"'"$BRANCH"'","inputs":{"mode":"'"${MODE:-apply}"'","profile":"'"${{
needs.set-matrix.outputs.profile }}"'"}}' to build the payload with a safe
formatter (e.g., printf or jq) and then call curl -d "$PAYLOAD" so that MODE
(and other inputs) are properly quoted; target the curl command and the
inputs.mode usage.
In @.github/workflows/helmsman_testrigs.yml:
- Line 37: Update the workflow step that currently references
actions/checkout@v2 to use actions/checkout@v4 for consistency with other
workflows; locate the line containing "uses: actions/checkout@v2" in
helmsman_testrigs.yml and change it to "uses: actions/checkout@v4", then run a
quick workflow lint or dry run to ensure compatibility with any inputs or
behavior differences.
In `@Helmsman/dsf/mosip-platform-java11/prereq-dsf.yaml`:
- Around line 36-37: Replace the hardcoded domain "sandbox.xyz.net" passed to
the hook in both postInstall and postUpgrade so the DSF can be used across
environments: update the invocations of install-istio-and-httpbin.sh (referenced
in postInstall and postUpgrade) to use a parameterized variable (e.g.,
${ENV_DOMAIN} or your Helmsman variable substitution) instead of the literal
domain, and ensure the variable is documented or exported where the
chart/Helmsman run will set it.
- Line 52: The postInstall entry uses literal placeholders in the alerting setup
call—replace "<slack-channel-name>", "<slack-api-url>", and "<env-name>" in the
postInstall command that invokes "$WORKDIR/hooks/alerting-setup.sh" with real
values or environment-variable references (e.g., ${SLACK_CHANNEL},
${SLACK_API_URL}, ${ENV_NAME}) before enabling rancher-monitoring; update the
postInstall string accordingly and add documentation or comments noting the
required env vars so deployers know to set them.
- Around line 88-89: The kibanaHost key in the istio-addons-logging DSF
currently hardcodes "kibana.sandbox.xyz.net"; change it to a configurable
parameter (e.g., use a templated/values variable instead of a literal) so
deployments can supply the correct domain per environment; update the kibanaHost
value to reference the configurable variable (e.g., a chart/DSF value like
kibanaHost or an environment placeholder) and ensure any code or templates that
consume kibanaHost (the istio-addons-logging configuration and any references
tied to installName "elasticsearch-kibana") are updated to read the parameter
and have a sensible default documented.
In `@terraform/modules/aws/activemq-setup/activemq-setup.sh`:
- Around line 73-81: The ansible.cfg being written by the activemq setup script
disables Galaxy GPG verification (galaxy_disable_gpg_verify = true), which skips
integrity checks; update the script to avoid hardcoding this insecure setting by
either removing the galaxy_disable_gpg_verify line, setting it to false, or
making it configurable via an environment variable or flag in the
activemq-setup.sh script so callers can opt-in only in trusted CI contexts; look
for the block that writes "$WORK_DIR/ansible.cfg" and the
galaxy_disable_gpg_verify entry to implement the change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 05e7c86e-f423-46eb-951e-903b32a99031
📒 Files selected for processing (19)
- .github/workflows/helmsman_esignet.yml
- .github/workflows/helmsman_external.yml
- .github/workflows/helmsman_mosip.yml
- .github/workflows/helmsman_testrigs.yml
- Helmsman/dsf/mosip-platform-java11/prereq-dsf.yaml
- Helmsman/dsf/mosip-platform-java21/esignet-dsf.yaml
- Helmsman/dsf/mosip-platform-java21/prereq-dsf.yaml
- Helmsman/dsf/mosip-platform-java21/testrigs-dsf.yaml
- Helmsman/hooks/esignet-1.7.1/captcha-postinstall.sh
- Helmsman/hooks/esignet-1.7.1/esignet-init-db.sh
- Helmsman/hooks/esignet-1.7.1/esignet-preinstall.sh
- Helmsman/hooks/esignet-1.7.1/keycloak-postinstall.sh
- Helmsman/hooks/esignet-1.7.1/redis-setup.sh
- Helmsman/hooks/esignet-1.7.1/softhsm-mock-identity-system-postinstall.sh
- terraform/implementations/aws/infra/profiles/esignet/aws.tfvars
- terraform/implementations/aws/infra/variables.tf
- terraform/modules/aws/activemq-setup/activemq-setup.sh
- terraform/modules/aws/activemq-setup/activemq-setup.yml
- terraform/modules/aws/activemq-setup/main.tf
✅ Files skipped from review due to trivial changes (4)
- Helmsman/hooks/esignet-1.7.1/esignet-init-db.sh
- terraform/implementations/aws/infra/variables.tf
- Helmsman/dsf/mosip-platform-java21/esignet-dsf.yaml
- terraform/implementations/aws/infra/profiles/esignet/aws.tfvars
🚧 Files skipped from review as they are similar to previous changes (7)
- Helmsman/hooks/esignet-1.7.1/softhsm-mock-identity-system-postinstall.sh
- Helmsman/hooks/esignet-1.7.1/redis-setup.sh
- Helmsman/dsf/mosip-platform-java21/prereq-dsf.yaml
- Helmsman/hooks/esignet-1.7.1/esignet-preinstall.sh
- terraform/modules/aws/activemq-setup/activemq-setup.yml
- Helmsman/dsf/mosip-platform-java21/testrigs-dsf.yaml
- Helmsman/hooks/esignet-1.7.1/captcha-postinstall.sh
if kubectl -n "$KEYCLOAK_NS" get configmap keycloak-host &>/dev/null; then
  kubectl -n "$KEYCLOAK_NS" get configmap keycloak-host -o yaml | \
    sed "s/namespace: $KEYCLOAK_NS/namespace: $ESIGNET_NS/g" | \
    kubectl apply -f -
🧩 Analysis chain
🌐 Web query:
When cloning Kubernetes Secrets or ConfigMaps across namespaces from kubectl get -o yaml/json, which metadata and annotations should be stripped before kubectl apply, and can metadata.resourceVersion/uid or Helm ownership annotations cause create/apply problems?
💡 Result:
When cloning Kubernetes Secrets or ConfigMaps across namespaces using kubectl get -o yaml (or json) followed by editing and kubectl apply, strip the following fields from metadata before applying, as they are cluster-specific, immutable on create, or cause conflicts:
- resourceVersion: causes "the object has been modified" errors on apply if mismatched with the live object.
- uid: immutable; specifying the wrong uid on an existing object fails create/update with "field is immutable".
- creationTimestamp: managed by the server; including it causes validation issues.
- selfLink: deprecated and cluster-specific.
- namespace: must be manually changed to the target namespace.
- managedFields: server-side apply ownership tracking; causes conflicts.
- status: entire section; not for client input.
Also strip or review annotations:
- kubectl.kubernetes.io/last-applied-configuration: kubectl apply tracking; remove unless it matches your intent.
Helm ownership annotations (meta.helm.sh/release-name, meta.helm.sh/release-namespace, label app.kubernetes.io/managed-by: Helm) do not directly cause kubectl apply problems, as kubectl ignores them. However, if the target namespace has a same-named resource managed by Helm with mismatched ownership metadata, Helm (not kubectl) will refuse upgrades/installs citing "invalid ownership metadata". For pure kubectl cloning these are safe to leave or strip; they become regular annotations on the new object.
Example workflow:
kubectl get secret mysecret -n source-ns -o yaml | \
  yq 'del(.metadata.namespace, .metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp, .metadata.selfLink, .metadata.managedFields, .status) | .metadata.namespace="target-ns"' | \
  kubectl apply -n target-ns -f -
--export (which automated this stripping) was deprecated in v1.14 and removed in v1.18; manual tools like yq/jq are required. Yes, metadata.resourceVersion/uid cause create/apply problems on existing objects (optimistic concurrency failure or immutable-field error).
Helm annotations cause issues only in Helm workflows.
Citations:
- 1: https://stackoverflow.com/questions/46297949/is-there-a-way-to-share-secrets-across-namespaces-in-kubernetes
- 2: https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file
- 3: kubernetes/kubernetes#96841 — "application/apply-patch+yaml does not retain ResourceVersion when patching v1/Secret:stringData"
- 4: https://stackoverflow.com/questions/66077782/correct-yaml-format-for-kubernetes-migration
- 5: kubernetes/kubernetes#127625 — "Inconsistent error message when attempting SSA with the wrong uid"
- 6: https://stackoverflow.com/questions/51297136/kubectl-error-the-object-has-been-modified-please-apply-your-changes-to-the-la
- 7: https://stackoverflow.com/questions/61392206/kubectl-export-is-deprecated-any-alternative
- 8: https://stackoverflow.com/questions/64608453/how-do-i-export-a-kubernetes-manifest-file-without-the-metadata
- 9: https://stackoverflow.com/questions/63016165/deploying-helm-release-forcefully-when-same-name-deployments-svcs-etc-are-run
- 10: https://github-wiki-see.page/m/akenel/ai-base-starter-kit/wiki/Troubleshooting-Helm-Deployment-%26-Fixing-Ownership-Issues-in-Kubernetes-%E2%80%90-kubectl-patch
- 11: helm/helm#8944 — "Are helm ownership label/annotations applied to hook resources as well?"
Strip server-managed metadata before reapplying these objects in another namespace.
When cloning Kubernetes objects across namespaces using kubectl get ... -o yaml | sed ... | kubectl apply, server-managed metadata fields (resourceVersion, uid, creationTimestamp, managedFields) are carried along and cause conflicts on apply: a resourceVersion mismatch triggers "object has been modified" errors, and uid is immutable, so reusing it causes validation failures on existing objects. Additionally, Helm ownership annotations should be stripped in this Helm hook context.
Proposed fix
- kubectl -n "$KEYCLOAK_NS" get configmap keycloak-host -o yaml | \
- sed "s/namespace: $KEYCLOAK_NS/namespace: $ESIGNET_NS/g" | \
- kubectl apply -f -
+ kubectl -n "$KEYCLOAK_NS" get configmap keycloak-host -o json | \
+ jq --arg ns "$ESIGNET_NS" '
+ del(
+ .metadata.uid,
+ .metadata.resourceVersion,
+ .metadata.creationTimestamp,
+ .metadata.managedFields,
+ .metadata.ownerReferences,
+ .metadata.annotations["kubectl.kubernetes.io/last-applied-configuration"],
+ .metadata.annotations["meta.helm.sh/release-name"],
+ .metadata.annotations["meta.helm.sh/release-namespace"]
+ )
+ | .metadata.namespace = $ns
+ ' | kubectl apply -f -

Apply the same transform to lines 47–49, 55–57, and 132–134.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Helmsman/hooks/esignet-1.7.1/keycloak-postinstall.sh` around lines 31 - 34,
The current pipeline that clones the keycloak-host configmap across namespaces
(kubectl -n "$KEYCLOAK_NS" get configmap keycloak-host -o yaml | sed ... |
kubectl apply -f -) fails because it preserves server-managed metadata and Helm
ownership annotations; update the transform to strip metadata.uid,
metadata.resourceVersion, metadata.creationTimestamp, metadata.managedFields and
any helm.sh/release or meta.helm.sh annotations before applying to $ESIGNET_NS
(e.g., add a filter step that removes those metadata fields and helm annotations
from the YAML produced by get), then apply the cleaned YAML; repeat the same
metadata/annotation-stripping change for the equivalent pipelines at the
locations handling other objects (the blocks referenced around lines 47–49,
55–57, and 132–134) so all cross-namespace applies use sanitized manifests.
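The metadata-stripping jq filter from the proposed fix can be exercised offline against a canned manifest, without a cluster. The sample ConfigMap JSON below is illustrative, not taken from the repo:

```shell
# Run the stripping filter over a hard-coded ConfigMap so the transform
# itself can be verified. Target namespace "esignet" is illustrative.
manifest='{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"keycloak-host","namespace":"keycloak","uid":"abc-123","resourceVersion":"4567","creationTimestamp":"2024-01-01T00:00:00Z","annotations":{"meta.helm.sh/release-name":"keycloak","meta.helm.sh/release-namespace":"keycloak","custom":"kept"}},"data":{"host":"iam.example.com"}}'
printf '%s' "$manifest" | jq --arg ns "esignet" '
  del(
    .metadata.uid,
    .metadata.resourceVersion,
    .metadata.creationTimestamp,
    .metadata.managedFields,
    .metadata.ownerReferences,
    .metadata.annotations["kubectl.kubernetes.io/last-applied-configuration"],
    .metadata.annotations["meta.helm.sh/release-name"],
    .metadata.annotations["meta.helm.sh/release-namespace"]
  )
  | .metadata.namespace = $ns
' > /tmp/cleaned-configmap.json
cat /tmp/cleaned-configmap.json
```

Note that del() on paths that are absent (for example managedFields on this canned object) is a no-op, so the same filter works whether or not the live object carries those fields.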
val=$(kubectl -n "$KEYCLOAK_NS" get secret keycloak-client-secrets \
  -o jsonpath="{.data.$key}" 2>/dev/null | base64 -d 2>/dev/null || echo "")
if [[ -n "$val" ]]; then
  HELM_SET_SECRETS+=(
    --set-string "clientSecrets[$idx].name=$key"
    --set-string "clientSecrets[$idx].secret=$(escape_helm_value "$val")"
  )
🧩 Analysis chain
🌐 Web query:
In Bash with set -euo pipefail, does val=$(kubectl ... | base64 -d || echo "") mask read/decode failures inside command substitution and continue as success?
💡 Result:
Yes, val=$(kubectl ... | base64 -d || echo "") masks read/decode failures inside the command substitution and continues as success under set -euo pipefail. The || echo "" makes the entire pipeline inside the substitution exit with status 0, so neither set -e nor pipefail ever sees the failure; the variable is simply assigned an empty string and execution continues.
Citations:
- 1: https://oneuptime.com/blog/post/2026-01-24-bash-set-e-error-handling/view
- 2: https://eagleeyet.net/blog/operating-systems/linux/set-euo-pipefail-why-this-one-line-separates-fragile-scripts-from-reliable-systems/
- 3: https://stackoverflow.com/questions/58859554/how-to-use-set-e-inside-a-bash-command-substitution
- 4: https://stackoverflow.com/questions/19054868/using-command-substitution-or-similar-but-still-having-script-exit-using-set
- 5: https://unix.stackexchange.com/questions/296526/set-e-in-a-subshell
- 6: https://unix.stackexchange.com/questions/312631/bash-script-with-set-e-doesnt-stop-on-command
- 7: https://stackoverflow.com/questions/79893587/is-the-way-bash-evaluates-this-subshell-with-set-e-a-bug
- 8: https://stackoverflow.com/questions/19624308/use-of-in-a-bash-pipeline
- 9: https://www.shellcheck.net/wiki/SC2312
- 10: https://www.gnu.org/software/bash/manual/html_node/The-Set-Builtin.html
🏁 Script executed:
cd /tmp && find . -name "keycloak-postinstall.sh" -type f 2>/dev/null | head -5

Repository: mosip/infra
Length of output: 37
🏁 Script executed:
# Search for the file more broadly
find . -path "*esignet-1.7.1*" -name "*keycloak*" -type f 2>/dev/null

Repository: mosip/infra
Length of output: 109
🏁 Script executed:
# Search by filename pattern
fd "keycloak-postinstall" . 2>/dev/null

Repository: mosip/infra
Length of output: 109
🏁 Script executed:
cat -n ./Helmsman/hooks/esignet-1.7.1/keycloak-postinstall.sh | head -20

Repository: mosip/infra
Length of output: 1189
🏁 Script executed:
# Check for set -euo pipefail at the start
head -15 ./Helmsman/hooks/esignet-1.7.1/keycloak-postinstall.sh

Repository: mosip/infra
Length of output: 838
🏁 Script executed:
# Examine lines 80-100
sed -n '80,100p' ./Helmsman/hooks/esignet-1.7.1/keycloak-postinstall.sh

Repository: mosip/infra
Length of output: 841
🏁 Script executed:
# Examine lines 110-140 for the sync-back logic mentioned
sed -n '110,140p' ./Helmsman/hooks/esignet-1.7.1/keycloak-postinstall.sh

Repository: mosip/infra
Length of output: 1464
Don't treat read/decode failures as "secret missing."
If kubectl or base64 -d fails on lines 86–87, the || echo "" masks that failure even under set -euo pipefail. The command substitution succeeds with an empty value, the script continues without the preserved client secret, and the Helm install proceeds with a new/default credential. This is then synced back to the keycloak namespace (lines 117–136), silently rotating or dropping the existing credential instead of failing fast.
Proposed fix
- val=$(kubectl -n "$KEYCLOAK_NS" get secret keycloak-client-secrets \
- -o jsonpath="{.data.$key}" 2>/dev/null | base64 -d 2>/dev/null || echo "")
- if [[ -n "$val" ]]; then
+ raw_val=$(kubectl -n "$KEYCLOAK_NS" get secret keycloak-client-secrets \
+ -o jsonpath="{.data.$key}" 2>/dev/null) || {
+ echo "Failed to read $key from keycloak-client-secrets" >&2
+ exit 1
+ }
+ if [[ -n "$raw_val" ]]; then
+ val=$(printf '%s' "$raw_val" | base64 -d 2>/dev/null) || {
+ echo "Failed to decode $key from keycloak-client-secrets" >&2
+ exit 1
+ }
HELM_SET_SECRETS+=(
--set-string "clientSecrets[$idx].name=$key"
--set-string "clientSecrets[$idx].secret=$(escape_helm_value "$val")"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Helmsman/hooks/esignet-1.7.1/keycloak-postinstall.sh` around lines 86 - 92,
The current command substitution for val masks kubectl/base64 failures with
`|| echo ""`, causing silent secret loss; change the logic so command failures are
detected and cause the script to fail instead of returning an empty string: run
the kubectl | base64 -d pipeline without the "|| echo \"\"" fallback, capture
its exit status (for example by using an if ! val=$(kubectl -n "$KEYCLOAK_NS"
get secret keycloak-client-secrets -o jsonpath="{.data.$key}" 2>/dev/null |
base64 -d 2>/dev/null); then handle the error) and on failure emit an error and
exit non‑zero; only when the pipeline succeeds use HELM_SET_SECRETS with the
escaped value (referenced symbols: val, KEYCLOAK_NS, HELM_SET_SECRETS,
escape_helm_value).
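The masking behavior the review describes is easy to reproduce in isolation. This minimal script (unrelated to the hook itself) continues past a failed pipeline and ends up with an empty value:

```shell
# Under set -euo pipefail, the "|| echo" fallback inside the command
# substitution converts the pipeline failure (from `false`) into success,
# so the script keeps running with an empty value instead of aborting.
set -euo pipefail
val=$(false | base64 -d 2>/dev/null || echo "")
echo "script continued; val='${val}'"
```

Here pipefail does propagate the failure of `false` out of the pipeline, but the `||` branch immediately swallows it, which is exactly why the hook never notices a failed secret read.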
Summary
- Fixed the workflow-caller in `helmsman_external.yml` to pass the correct profile (e.g. `mosip-platform-java11`) when auto-triggering `helmsman_mosip.yml` (both on `workflow_dispatch` and push events)
- Replaced the silent `java11` fallback with an explicit error when the profile cannot be detected from a push event

Files Changed
- .github/workflows/helmsman_external.yml
- .github/workflows/helmsman_mosip.yml
- .github/workflows/helmsman_esignet.yml
- .github/workflows/helmsman_testrigs.yml

Test plan
- Trigger `helmsman_external.yml` via `workflow_dispatch` with `mosip-platform-java21` — verify `helmsman_mosip.yml` is triggered with `java21`
- Push a change to `Helmsman/dsf/mosip-platform-java21/external-dsf.yaml` — verify the workflow deploys the `java21` profile
- Push a change to `Helmsman/dsf/esignet/esignet-dsf.yaml` — verify `helmsman_esignet.yml` uses the `esignet` profile

🤖 Generated with Claude Code
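The push-event profile detection described in the test plan can be sketched as a path-to-profile mapping that fails fast instead of silently falling back to java11. The sed expression and error message below are illustrative, not the exact workflow code:

```shell
# Derive the Helmsman profile from the changed DSF path; the directory name
# under Helmsman/dsf/ is the profile. The sample path is illustrative.
changed_file="Helmsman/dsf/mosip-platform-java21/external-dsf.yaml"
profile=$(printf '%s\n' "$changed_file" | sed -n 's|^Helmsman/dsf/\([^/]*\)/.*|\1|p')
if [ -z "$profile" ]; then
  # Explicit failure instead of a silent java11 default.
  echo "ERROR: unable to detect profile from '$changed_file'" >&2
  exit 1
fi
echo "detected profile: $profile"
```

Run against a path outside `Helmsman/dsf/`, the sed match produces nothing and the script exits non-zero, which is the explicit-error behavior this PR introduces.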