MQ Server pods fail when deploying to AWS using EFS storage
Describe the bug
When deploying the solution to an OpenShift 4.4 cluster running in AWS, with AWS EFS used for persistent storage, the spm-mqserver-curam-0 and spm-mqserver-rest-0 pods fail with the message: "Error setting admin password: /usr/bin/sudo: exit status 1: sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?"
To Reproduce
Steps to reproduce the behavior:
1. Provision an EFS filesystem in AWS and grant access to the security groups used by the OCP nodes.
2. Create the required directories on the EFS filesystem (the deployment fails if these directories do not exist).
3. Run the deployment command. It always fails the first time with the message "Error: secrets \"spm-mq-credentials\" already exists", so rerun it.
4. Wait for the deployment to complete.
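Step 2 above can be sketched as a small helper that pre-creates the MQ directories on the mounted EFS volume. The directory names mirror the nfs4 mounts reported in the pod log below (mqm, mqm-data, mqm-log); the mount point and the group-0 ownership are assumptions based on the pod's reported primary group, not a confirmed procedure from the runbook:

```shell
# Sketch: pre-create the directories the MQ server pods expect on the EFS
# volume before deploying. Directory names are taken from the nfs4 mounts in
# the pod log; the mount point is hypothetical.
prepare_efs_dirs() {
  mount_point="$1"
  for dir in mqm mqm-data mqm-log; do
    mkdir -p "$mount_point/$dir"
    # Group 0 matches the pod's reported primary group; may need root to set
    chgrp 0 "$mount_point/$dir" 2>/dev/null || true
    chmod 0770 "$mount_point/$dir"
  done
}

# Example (assuming the EFS filesystem is mounted at /mnt/efs):
# prepare_efs_dirs /mnt/efs
```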
Expected behavior
The solution deploys to OpenShift and the spm-mqserver-curam-0 and spm-mqserver-rest-0 pods move into a Running state.
Please complete the following information:
* OpenShift Version: 4.4.26
* Cúram SPM Version: 7.0.10
Log Collection
2020-10-23T11:59:07.670Z CPU architecture: amd64
2020-10-23T11:59:07.670Z Linux kernel version: 4.18.0-193.23.1.el8_2.x86_64
2020-10-23T11:59:07.670Z Container runtime: kube
2020-10-23T11:59:07.670Z Base image: Red Hat Enterprise Linux Server 7.6 (Maipo)
2020-10-23T11:59:07.672Z Running as user ID 1000590000 (1000590000 user) with primary group 0, and supplementary groups 1000590000
2020-10-23T11:59:07.672Z Capabilities (bounding set): chown,dac_override,fowner,fsetid,setpcap,net_bind_service,net_raw,sys_chroot
2020-10-23T11:59:07.672Z seccomp enforcing mode: disabled
2020-10-23T11:59:07.672Z Process security attributes: system_u:system_r:container_t:s0:c19,c24
2020-10-23T11:59:07.672Z Detected 'nfs4' volume mounted to /mnt/mqm-log
2020-10-23T11:59:07.672Z Detected 'nfs4' volume mounted to /mnt/mqm-data
2020-10-23T11:59:07.672Z Detected 'nfs4' volume mounted to /mnt/mqm
2020-10-23T11:59:07.672Z Multi-instance queue manager: enabled
2020-10-23T11:59:07.674Z Error setting admin password: /usr/bin/sudo: exit status 1: sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?
If the storage was initialized using a prior version of the runbook (which used an older version of MQ), you may need to update your override values to include a supplementary group:
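A minimal sketch of such an override follows. The key path and the GID are assumptions for illustration; check the values.yaml of your chart version for the actual location of the pod security context:

```yaml
# Hypothetical override values — the exact key path depends on your chart's
# values.yaml; this only illustrates adding a supplementary group so the pod
# can access data created by the older MQ image on the EFS volume.
mqserver:
  security:
    context:
      supplementalGroups:
        - 65534   # assumed GID owning the existing MQ data on EFS
```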