linkerd/linkerd2-proxy-init #225 (Closed)

Description
What is the issue?
Linkerd2-proxy fails to run with privileged permissions on CentOS 8.
The install parameters are --set proxyInit.runAsRoot=true --set "proxyInit.iptablesMode=nft".
I have tried many versions; all of them fail.
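For reference, the two --set flags above map to the following Helm values:

proxyInit:
  runAsRoot: true
  iptablesMode: nft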
How can it be reproduced?
diff --git a/charts/partials/templates/_proxy.tpl b/charts/partials/templates/_proxy.tpl
index 8caf7d384..f9a633340 100644
--- a/charts/partials/templates/_proxy.tpl
+++ b/charts/partials/templates/_proxy.tpl
@@ -156,15 +156,9 @@ readinessProbe:
{{ include "partials.resources" .Values.proxy.resources }}
{{- end }}
securityContext:
- allowPrivilegeEscalation: false
- {{- if .Values.proxy.capabilities -}}
- {{- include "partials.proxy.capabilities" . | nindent 2 -}}
- {{- end }}
- readOnlyRootFilesystem: true
- runAsNonRoot: true
- runAsUser: {{.Values.proxy.uid}}
- seccompProfile:
- type: RuntimeDefault
+ privileged: true
+ runAsNonRoot: false
+ runAsUser: 0
terminationMessagePolicy: FallbackToLogsOnError
{{- if or (.Values.proxy.await) (.Values.proxy.waitBeforeExitSeconds) }}
lifecycle:
Then deploy as follows:
bin/linkerd install --crds | kubectl apply -f - && bin/linkerd install --set proxyInit.runAsRoot=true --set "proxyInit.iptablesMode=nft" | kubectl apply -f -
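After installing, the securityContext that the patched template actually rendered can be checked with something like this (deployment name as it appears in my cluster):

# Dump the proxy container's securityContext from the destination deployment
kubectl -n linkerd get deploy linkerd-destination -o yaml | grep -B2 -A6 'securityContext:'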
Logs, error output, etc
The pods fail within about two minutes:
linkerd linkerd-destination-bd75949c5-mw927 0/4 PostStartHookError 4 (2s ago) 5m7s 10.244.0.79 spr-loong.localdomain <none> <none>
linkerd linkerd-identity-6fc966499f-x9qf2 2/2 Running 0 5m7s 10.244.0.80 spr-loong.localdomain <none> <none>
linkerd linkerd-proxy-injector-c7cdd5c74-49xjk 0/2 PostStartHookError 1 (2s ago) 5m7s 10.244.0.81 spr-loong.localdomain <none> <none>
The destination and proxy-injector pods then crash.
Destination pod log:
[ 133.029989s] ERROR ThreadId(02) identity: linkerd_proxy_identity_client::certify: Failed to obtain identity error=status: Unknown, message: "controller linkerd-identity-headless.linkerd.svc.cluster.local:8080: service in fail-fast", details: [], metadata: MetadataMap { headers: {} } error.sources=[controller linkerd-identity-headless.linkerd.svc.cluster.local:8080: service in fail-fast, service in fail-fast]
[ 135.014859s] WARN ThreadId(01) linkerd_app: Waiting for identity to be initialized...
[ 144.032957s] WARN ThreadId(02) identity:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:endpoint{addr=10.244.0.80:8080}: linkerd_reconnect: Failed to connect error=endpoint 10.244.0.80:8080: connect timed out after 1s error.sources=[connect timed out after 1s]
[ 145.136510s] WARN ThreadId(02) identity:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:endpoint{addr=10.244.0.80:8080}: linkerd_reconnect: Failed to connect error=endpoint 10.244.0.80:8080: connect timed out after 1s error.sources=[connect timed out after 1s]
[ 146.031966s] WARN ThreadId(02) identity:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: linkerd_stack::failfast: Service entering failfast after 3s
[ 146.032047s] ERROR ThreadId(02) identity: linkerd_proxy_identity_client::certify: Failed to obtain identity error=status: Unknown, message: "controller linkerd-identity-headless.linkerd.svc.cluster.local:8080: service in fail-fast", details: [], metadata: MetadataMap { headers: {} } error.sources=[controller linkerd-identity-headless.linkerd.svc.cluster.local:8080: service in fail-fast, service in fail-fast]
[ 150.015555s] WARN ThreadId(01) linkerd_app: Waiting for identity to be initialized...
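For further diagnosis, the proxy-init container's output on a failing pod should show whether the nft rules were actually written (pod name taken from the listing above; the init container is usually named linkerd-init, adjust if it differs):

# Firewall rules written by proxy-init for the failing destination pod
kubectl -n linkerd logs linkerd-destination-bd75949c5-mw927 -c linkerd-init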
output of linkerd check -o short
N/A
Environment
- K8s version: 1.26
- Host OS: CentOS 8
- Linkerd2: edge-23.3.1 and several other versions
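For completeness, the host's iptables backend can be confirmed as below; on CentOS 8 I would expect the nf_tables variant:

# Reports the backend in use, e.g. "iptables v1.8.4 (nf_tables)"
iptables --version
# Lists the nftables ruleset, where the proxy-init rules should show up in nft mode
nft list ruleset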
Possible solution
No response
Additional context
No response
Would you like to work on fixing this bug?
yes