
Cannot exec into a privileged container - glusterfs #2200

Closed

zrahui opened this issue Feb 5, 2018 · 5 comments

Comments


zrahui commented Feb 5, 2018

Is this a request for help?: Yes


Is this an ISSUE or FEATURE REQUEST? (choose one): Issue


What version of acs-engine?: 0.12.5


Orchestrator and version (e.g. Kubernetes, DC/OS, Swarm)
Kubernetes 1.9.1

What happened:
Not being able to exec into the glusterfs pods is preventing the installation of GlusterFS into an acs-engine-deployed cluster. This used to work, but I believe #1961 broke it.

What you expected to happen:
Successfully install GlusterFS using the gluster-kubernetes repo.

How to reproduce it (as minimally and precisely as possible):
Run the ./gk-deploy script from the gluster-kubernetes repo to provision GlusterFS into the cluster.
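
Roughly, the steps look like this (a sketch only; the gk-deploy flag, namespace, and pod names are assumptions and may differ per gluster-kubernetes version):

  # Deploy GlusterFS with the gluster-kubernetes deploy script
  # (topology.json describes the storage nodes; -g is assumed here to mean
  # "deploy glusterfs pods" and may differ by version).
  ./gk-deploy -g topology.json

  # Try to exec into one of the privileged glusterfs pods; with the
  # DenyEscalatingExec admission plugin enabled, the API server rejects this.
  kubectl get pods | grep glusterfs
  kubectl exec -it <glusterfs-pod-name> -- gluster peer status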

Anything else we need to know:
Ideally, I'd like to know whether this check can be disabled for specific pods so that GlusterFS can be deployed. Any help is much appreciated.
We've also got issues with using hostNetwork as part of that same repo, which seems to break DNS in an Azure CNI-enabled cluster, but that's another issue altogether.


zrahui commented Feb 5, 2018

FYI: I've managed to remove the DenyEscalatingExec admission plugin from the apiserver manifest (YAML) to work around this for now.
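
For anyone else hitting this, a minimal sketch of that manual workaround, assuming the acs-engine-generated static pod manifest lives at /etc/kubernetes/manifests/kube-apiserver.yaml on the masters (the path may vary by version):

  # On each master: find the admission control list in the apiserver manifest.
  grep -n "admission-control" /etc/kubernetes/manifests/kube-apiserver.yaml

  # Edit the file and delete DenyEscalatingExec from that comma-separated list;
  # the kubelet detects the changed static pod manifest and restarts the apiserver.
  sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml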


pidah commented Feb 5, 2018

@zrahui you can customize/override the admission control plugin list passed to the API server using the apiServerConfig option, as follows:

  "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorRelease": "1.8",
      "kubernetesConfig": {
        "apiServerConfig": {
          "--admission-control":  "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,AlwaysPullImages"
        }
      }
    } 
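
For completeness, a sketch of how a change like this gets rolled out with acs-engine (file, dnsprefix, and resource-group names below are placeholders for your own cluster definition and environment):

  # Regenerate the ARM templates from the cluster definition that contains the
  # apiServerConfig override above, then deploy them.
  acs-engine generate kubernetes.json
  az group deployment create \
    --resource-group <resource-group> \
    --template-file _output/<dnsprefix>/azuredeploy.json \
    --parameters @_output/<dnsprefix>/azuredeploy.parameters.json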


zrahui commented Feb 6, 2018

Thanks @pidah that's a more elegant solution 👍

zrahui closed this as completed Feb 6, 2018

jalberto commented Apr 3, 2018

@zrahui this requires a new deployment. How can it be fixed without creating a new cluster?

@huydinhle

@jalberto the only way is to change it directly on the master nodes, in the API server manifest under /etc/kubernetes/manifests.
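
A minimal sketch of that in-place edit, assuming the list lives in /etc/kubernetes/manifests/kube-apiserver.yaml and DenyEscalatingExec is followed by a comma there (back the file up first and adjust the pattern if not):

  # Run on every master node.
  sudo cp /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/kube-apiserver.yaml.bak
  sudo sed -i 's/DenyEscalatingExec,//' /etc/kubernetes/manifests/kube-apiserver.yaml
  # The kubelet restarts the static apiserver pod on its own; once it is back,
  # exec into the privileged glusterfs pods should be allowed again.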
