/etc/kubernetes/azure.json: permission denied #729

Closed
toutougabi opened this issue Nov 21, 2018 · 10 comments

@toutougabi

What happened:
Since today, our pods cannot access /etc/kubernetes/azure.json.
This worked for the last 50 days, but since this morning our custom External DNS pods crash on start with this error: failed to read Azure config file '/etc/kubernetes/azure.json': open /etc/kubernetes/azure.json: permission denied
What you expected to happen:
The pods should be able to read /etc/kubernetes/azure.json, as they have for the past 50 days.
How to reproduce it (as minimally and precisely as possible):
Deploy a custom teapot External DNS deployment (manifests below).
Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): 1.11
  • Size of cluster (how many worker nodes are in the cluster?): 1
  • General description of workloads in the cluster: small HTTP micro-services
  • Others:
@andyzhangx
Contributor

@toutougabi could you share the teapot External DNS deployment config?

@toutougabi
Author

toutougabi commented Nov 21, 2018

apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns

@toutougabi
Author

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]

@toutougabi
Author

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default

@toutougabi
Author

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.opensource.zalan.do/teapot/external-dns:latest
        args:
        - --source=service
        - --source=ingress
        - --provider=azure
        - --azure-resource-group=dns
        - --interval=0m5s
        volumeMounts:
        - name: azure-config-file
          mountPath: /etc/kubernetes/azure.json
          readOnly: true
      volumes:
      - name: azure-config-file
        hostPath:
          path: /etc/kubernetes/azure.json
          type: File

@andyzhangx
Contributor

andyzhangx commented Nov 22, 2018

@toutougabi Have you SSH'd into that node and checked /etc/kubernetes/azure.json manually? We recently found a bug related to KMS encryption (Azure/aks-engine#49); I'm not sure whether it's caused by that.
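If SSH access to the node is not handy, one way to check is a throwaway pod that mounts the same hostPath and lists the file's owner and mode. A rough sketch, where the pod name and the busybox image are just placeholders and not part of the configs above:

apiVersion: v1
kind: Pod
metadata:
  name: azure-json-check    # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox           # any image with ls works
    command: ["ls", "-l", "/etc/kubernetes/azure.json"]
    securityContext:
      runAsUser: 0           # run as root so the check itself cannot fail on permissions
    volumeMounts:
    - name: azure-config-file
      mountPath: /etc/kubernetes/azure.json
      readOnly: true
  volumes:
  - name: azure-config-file
    hostPath:
      path: /etc/kubernetes/azure.json
      type: File

kubectl logs azure-json-check then shows the owner and mode the node actually applied to the file.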

@ritazh
Member

ritazh commented Nov 22, 2018

@andyzhangx This is on AKS, right? I don't think we have KMS enabled there. Also, KMS only affects master nodes, not agent nodes.

@toutougabi
Author

Yes, on AKS.

@tomasr

tomasr commented Nov 22, 2018

Does your teapot container run as root? AFAIK, the azure.json file is always mounted with 600 permissions, so unless the container is running under root, it won't be able to read it.

You're running the latest external-dns image and I believe this was indeed recently changed: kubernetes-sigs/external-dns#684.

If that's the case, you could try configuring the security context back to root with runAsUser: 0 (sketched below), or do as the docs say and manually create your own secret to mount as a volume, which does not have the permissions issue.
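For the runAsUser: 0 option, the container in the deployment above would gain a securityContext. A minimal sketch of just that containers section, assuming the manifest posted earlier in this thread:

containers:
- name: external-dns
  image: registry.opensource.zalan.do/teapot/external-dns:latest
  securityContext:
    runAsUser: 0             # run as root so the root-owned, mode-600 azure.json stays readable
  args:
  - --source=service
  - --source=ingress
  - --provider=azure
  - --azure-resource-group=dns
  - --interval=0m5s
  volumeMounts:
  - name: azure-config-file
    mountPath: /etc/kubernetes/azure.json
    readOnly: true

The secret-based alternative would instead replace the hostPath volume with a Secret holding the same Azure credentials, which avoids depending on the host file's permissions at all.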

@jnoller
Contributor

jnoller commented Apr 4, 2019

Closing this issue as old/stale.

If this issue still comes up, please confirm you are running the latest AKS release. If you are on the latest release and the issue can be reproduced outside of your specific cluster, please open a new GitHub issue.

If you are only seeing this behavior on clusters with a unique configuration (such as custom DNS, VNet, etc.), please open an Azure technical support ticket.
