Simple ClickHouse instance with PVC: installation fails while starting the pod #1464
Let's check: is /var/lib/clickhouse present in the pod mounts? |
It is there. |
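(For reference, one way to verify the mount — a hedged sketch; the pod name is an assumption based on the operator's default `chi-<chi>-<cluster>-<shard>-<replica>` naming and the `test-clickhouse` namespace used later in this thread:)

```sh
# List the volume mounts declared on the clickhouse container.
kubectl get pod chi-clickhouse-clickhouse-0-0-0 -n test-clickhouse \
  -o jsonpath='{.spec.containers[0].volumeMounts}'

# Or inspect the directory from inside the container.
kubectl exec -n test-clickhouse chi-clickhouse-clickhouse-0-0-0 -- \
  ls -ld /var/lib/clickhouse
```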
Which component is responsible for the second container with `sleep 30` and the securityContext? I think the root cause is the security context:

```yaml
securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
  runAsNonRoot: true
  runAsUser: 1000730000
```

This is too strict a security context. Could we change it to the following, to match the UID used inside the Docker image?

```yaml
securityContext:
  runAsUser: 101
  runAsGroup: 101
  fsGroup: 101
  allowPrivilegeEscalation: false
  capabilities:
    drop: [ "ALL" ]
    add: [ "CAP_NICE", "CAP_IPC_LOCK" ]
```

If we can't change the securityContext, could we add

```yaml
env:
  - name: CLIKCHOUSE_UID
    value: 1000730000
```

to the pod template? Something like this should work:

```yaml
spec:
  defaults:
    templates:
      podTemlate: custom-uid
  templates:
    podTemplates:
      - name: custom-uid
        spec:
          containers:
            - name: clickhouse
              image: clickhouse/clickhouse-server:latest
              env:
                - name: CLIKCHOUSE_UID
                  value: 1000730000
                - name: CLIKCHOUSE_GID
                  value: 1000730000
```
|
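(Note that, as written, the snippet contains the typos referenced later in this thread: `podTemlate` should be `podTemplate`, and `CLIKCHOUSE_UID`/`CLIKCHOUSE_GID` should be `CLICKHOUSE_UID`/`CLICKHOUSE_GID`, the variables the official clickhouse-server image entrypoint reads. A corrected sketch, with the numeric env values quoted as Kubernetes requires:)

```yaml
# Corrected version of the template above: key names fixed and
# numeric env values quoted, since Kubernetes env values must be strings.
spec:
  defaults:
    templates:
      podTemplate: custom-uid
  templates:
    podTemplates:
      - name: custom-uid
        spec:
          containers:
            - name: clickhouse
              image: clickhouse/clickhouse-server:latest
              env:
                - name: CLICKHOUSE_UID
                  value: "1000730000"
                - name: CLICKHOUSE_GID
                  value: "1000730000"
```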
Tried the snippet above, but adding it resulted in a pending instance creation: even after 3 minutes it remains in a pending state, while without the snippet it was created in a few seconds. |
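(One way to see why an installation hangs in a pending state — a hedged sketch; `chi` is the short name the operator registers for ClickHouseInstallation resources, and the names match those used later in this thread:)

```sh
# Inspect the installation status and recent scheduling events.
kubectl get chi -n test-clickhouse
kubectl describe chi clickhouse -n test-clickhouse
kubectl get events -n test-clickhouse --sort-by=.lastTimestamp
```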
Which? |
Apply the changes from #1464 (comment) and share the status when there is an update. |
No success. Even after 65m, no update. One thing I notice is that "podTemlate: custom-uid" is present in the respective section. |
Check the clickhouse-operator logs; check … and … |
Also check … |
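(For reference, the operator logs can typically be fetched like this — a sketch assuming the default install, which places the `clickhouse-operator` deployment into the `kube-system` namespace:)

```sh
# Stream the operator's logs; namespace and deployment name are
# assumptions based on the operator's default install manifest.
kubectl logs -n kube-system deployment/clickhouse-operator
```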
In the operator log I found the issue, which was fixed by adding quotes around the env values. But I hit the same issue again while starting the pod. |
We need to figure out which component added the security context in your OpenShift. |
I think the default "PodSecurityPolicy" is enabled. I don't have any other security/policy enforcement tool installed. |
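(On OpenShift, the SCC that admitted a pod is recorded in an annotation, so one hedged way to see which policy mutated the pod — pod name again assumed from the operator's default naming:)

```sh
# Prints the SCC applied to the pod, e.g. "restricted".
kubectl get pod chi-clickhouse-clickhouse-0-0-0 -n test-clickhouse \
  -o jsonpath='{.metadata.annotations.openshift\.io/scc}'
```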
Let's apply CLICKHOUSE_DO_NOT_CHOWN=1:

```yaml
spec:
  defaults:
    templates:
      podTemlate: custom-uid
  templates:
    podTemplates:
      - name: custom-uid
        spec:
          containers:
            - name: clickhouse
              image: clickhouse/clickhouse-server:latest
              env:
                - name: CLIKCHOUSE_UID
                  value: "1000730000"
                - name: CLIKCHOUSE_GID
                  value: "1000730000"
                - name: CLICKHOUSE_DO_NOT_CHOWN
                  value: "1"
```
|
same issue |
@Rajpratik71, is it resolved? What is the reason for the CrashLoopBackOff? (You should be able to see it in the container logs.) Do you need the clickhouse-logs container, btw? It is rarely useful. |
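(For reference, logs for a crashing container can be pulled like so — a sketch; the container name `clickhouse` comes from the pod template above, and `--previous` shows the last crashed run:)

```sh
kubectl logs -n test-clickhouse chi-clickhouse-clickhouse-0-0-0 \
  -c clickhouse --previous
```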
@alex-zaitsev Same issue. Getting the below in the log: |
@Rajpratik71, have you tried adding the security context as suggested above? Could you post the full CHI spec here, with sensitive data removed? |
I have run into the same issue when deploying on OpenShift. I tried the above (with the spelling mistakes fixed) without success. It might be related to issue ClickHouse/ClickHouse#59141 as well. |
This works, as we are running the intended UID and GID for the Docker container. Unfortunately this means that we will need a custom SCC (or anyuid) to run the pod. Hopefully the above-mentioned issue gets fixed so that we can run the Docker container as non-root. |
For those on OpenShift, you can try this workaround (per namespace):

```sh
kubectl create sa clickhouse -n test-clickhouse
oc adm policy add-scc-to-user anyuid -z clickhouse -n test-clickhouse
```

```yaml
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: clickhouse
  namespace: test-clickhouse
spec:
  defaults:
    templates:
      podTemplate: custom-uid
  configuration:
    clusters:
      - name: clickhouse
        layout:
          shardsCount: 1
          replicasCount: 1
  templates:
    podTemplates:
      - name: custom-uid
        spec:
          securityContext:
            runAsUser: 101
            runAsGroup: 101
            fsGroup: 101
            allowPrivilegeEscalation: false
          serviceAccountName: clickhouse
          automountServiceAccountToken: false
```
|
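(To confirm the workaround took effect, a hedged check using the names above:)

```sh
# The pod should now be scheduled and running.
kubectl get pods -n test-clickhouse

# Verify the container is running as the intended UID (101).
kubectl exec -n test-clickhouse chi-clickhouse-clickhouse-0-0-0 -- id
```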
@keyute Thanks for the OpenShift workaround, let's close the issue. |
The operator install succeeded.
When I tried to deploy a ClickHouse instance using the below YAML,
I got the below error while starting the pod.
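(For context, a minimal CHI spec with a PVC — the setup the issue title refers to — would look roughly like this. This is a sketch: the `data` template name and storage size are illustrative, and it assumes `dataVolumeClaimTemplate` is the operator field that binds the claim template to /var/lib/clickhouse:)

```yaml
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: simple-pvc
spec:
  defaults:
    templates:
      # Binds the claim template below to the data directory.
      dataVolumeClaimTemplate: data
  configuration:
    clusters:
      - name: simple
        layout:
          shardsCount: 1
          replicasCount: 1
  templates:
    volumeClaimTemplates:
      - name: data
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi
```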