
MQ Server pods fail when deploying to AWS using EFS storage #42

Closed
daisleyj opened this issue Oct 23, 2020 · 3 comments

Describe the bug
When deploying the solution to an OpenShift 4.4 cluster running in AWS, using AWS EFS for persistent storage, the spm-mqserver-curam-0 and spm-mqserver-rest-0 pods fail with the message: Error setting admin password: /usr/bin/sudo: exit status 1: sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?

To Reproduce
Steps to reproduce the behavior:

  1. Provision an EFS Filesystem in AWS and give access to the security groups used by the OCP nodes
  2. Create required directories on the EFS filesystem - Deployment fails if directories do not exist
  3. Deploy helm charts using the command
    helm upgrade --install spm local-development/spm -f curam_containerisation/static/resources/os-values.yaml
  4. Rerun the deployment command; the first run always fails with the message Error: secrets "spm-mq-credentials" already exists
  5. Wait for deployment to complete
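
Steps 2 and 3 above can be sketched as a small shell helper. The directory names here are an assumption inferred from the mount points in the log below (/mnt/mqm, /mnt/mqm-data, /mnt/mqm-log), not from the runbook; adjust them to whatever your charts' PersistentVolumes actually expect.

```shell
# Step 2 sketch: pre-create the directories on the mounted EFS filesystem.
# The deployment fails if they do not exist. Directory names are assumed
# from the pod log's mount points, not confirmed by the runbook.
prepare_efs_dirs() {
  local root="$1"   # path where the EFS filesystem is mounted, e.g. /mnt/efs
  mkdir -p "$root/mqm" "$root/mqm-data" "$root/mqm-log"
}

# Step 2, assuming EFS is mounted at /mnt/efs:
#   prepare_efs_dirs /mnt/efs
# Step 3, deploy the charts:
#   helm upgrade --install spm local-development/spm \
#     -f curam_containerisation/static/resources/os-values.yaml
```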

Expected behavior
The solution deploys to OpenShift and the spm-mqserver-curam-0 and spm-mqserver-rest-0 pods move into a Running state.


Please complete the following information:
* OpenShift Version: 4.4.26
* Cúram SPM Version: 7.0.10


Log Collection

2020-10-23T11:59:07.670Z CPU architecture: amd64
2020-10-23T11:59:07.670Z Linux kernel version: 4.18.0-193.23.1.el8_2.x86_64
2020-10-23T11:59:07.670Z Container runtime: kube
2020-10-23T11:59:07.670Z Base image: Red Hat Enterprise Linux Server 7.6 (Maipo)
2020-10-23T11:59:07.672Z Running as user ID 1000590000 (1000590000 user) with primary group 0, and supplementary groups 1000590000
2020-10-23T11:59:07.672Z Capabilities (bounding set): chown,dac_override,fowner,fsetid,setpcap,net_bind_service,net_raw,sys_chroot
2020-10-23T11:59:07.672Z seccomp enforcing mode: disabled
2020-10-23T11:59:07.672Z Process security attributes: system_u:system_r:container_t:s0:c19,c24
2020-10-23T11:59:07.672Z Detected 'nfs4' volume mounted to /mnt/mqm-log
2020-10-23T11:59:07.672Z Detected 'nfs4' volume mounted to /mnt/mqm-data
2020-10-23T11:59:07.672Z Detected 'nfs4' volume mounted to /mnt/mqm
2020-10-23T11:59:07.672Z Multi-instance queue manager: enabled
2020-10-23T11:59:07.674Z Error setting admin password: /usr/bin/sudo: exit status 1: sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?
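
The error message points at two possible causes: /usr/bin/sudo sitting on a filesystem mounted with nosuid, or an NFS mount without root access (root squash). A minimal check for the nosuid case, run from inside the failing pod, might look like this; the has_nosuid helper is my own sketch, not part of the MQ image.

```shell
# Return success if the given mount point is mounted with the 'nosuid'
# option, by scanning the options field (column 4) of /proc/mounts.
has_nosuid() {
  awk -v t="$1" '
    $2 == t {
      n = split($4, opts, ",")
      for (i = 1; i <= n; i++) if (opts[i] == "nosuid") found = 1
    }
    END { exit !found }
  ' /proc/mounts
}

# Example, using the mount points reported in the log:
#   has_nosuid /mnt/mqm && echo "/mnt/mqm is mounted nosuid"
```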
ourboy (Contributor) commented Oct 23, 2020

@daisleyj

Can you confirm what version of the runbook you are using?

Do you have any customizations of the MQ helm chart you are using to deploy? If so, could you please attach them?

andreyzher (Contributor) commented

Hi @daisleyj ,

If the storage has been initialized using a prior version of the runbook (which used an older version of MQ), you may need to update your override values to include a supplemental group:

global:
  mq:
    security:
      context:
        supplementalGroups: [888]

Source: IBM MQ upstream chart
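
One way to apply this is to put the override in its own values file and layer it over the existing one on redeploy; the file name mq-values.yaml is an assumption, and you could equally merge the snippet into your existing override file.

```shell
# Write the supplemental-group override to a values file
# (mq-values.yaml is a hypothetical name, not from the thread).
cat > mq-values.yaml <<'EOF'
global:
  mq:
    security:
      context:
        supplementalGroups: [888]
EOF

# Then redeploy, layering it over the existing overrides
# (later -f files take precedence in helm):
#   helm upgrade --install spm local-development/spm \
#     -f curam_containerisation/static/resources/os-values.yaml \
#     -f mq-values.yaml
```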

daisleyj (Author) commented

Thank you @andreyzher, adding supplemental group 888 brought the pods up correctly.
