Go SDK upgrade performing actions in the wrong namespace #8685
Check and make sure the kube client used by the upgrade action is pointing at the correct namespace. By default, it will use the kubeconfig's default namespace, which is usually default (see lines 61 to 62 and lines 130 to 138 in 04fb358).
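For context, here is a minimal sketch of the usual SDK wiring this thread assumes (the helper name and the use of cli.New are illustrative, not taken from the reporter's code). The detail that matters is that the kube client resolves its namespace from the RESTClientGetter handed to Configuration.Init, not from fields set later on the action:

```go
package main

import (
	"log"
	"os"

	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/cli"
)

// newActionConfig wires up an action.Configuration roughly the way the
// Helm CLI does.
func newActionConfig(namespace string) (*action.Configuration, error) {
	// cli.New reads the HELM_* environment variables and kubeconfig defaults.
	settings := cli.New()

	actionConfig := new(action.Configuration)
	// Init takes a RESTClientGetter, the namespace used for release storage,
	// the storage driver, and a logger. With this wiring the getter still
	// falls back to the kubeconfig's current-context namespace.
	err := actionConfig.Init(settings.RESTClientGetter(), namespace,
		os.Getenv("HELM_DRIVER"), log.Printf)
	return actionConfig, err
}

func main() {
	cfg, err := newActionConfig("kafka")
	if err != nil {
		log.Fatal(err)
	}
	_ = cfg // an upgrade action would be built from cfg
}
```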
Well, I have set the namespace on the upgrade client:

    upgradeClient := action.NewUpgrade(actionConfig)
    upgradeClient.Namespace = "kafka"
    upgradeClient.Atomic = true
    upgradeClient.ReuseValues = true
    log.Printf("%+v", upgradeClient)

Logging the client this way actually shows that the namespace is kafka.
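A quick way to see which namespace the client will actually target is to ask the getter's kubeconfig loader directly (a sketch reusing the settings value from the setup above; if this prints default instead of kafka, the getter is the culprit rather than the Upgrade fields):

```go
// Resolve the namespace the RESTClientGetter will hand to the kube client.
// The boolean reports whether the namespace was explicitly overridden or
// came from the kubeconfig default.
ns, overridden, err := settings.RESTClientGetter().
	ToRawKubeConfigLoader().Namespace()
if err != nil {
	log.Fatal(err)
}
log.Printf("kube client namespace: %s (overridden: %v)", ns, overridden)
```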
Is it possible that the resources are hard-coding a namespace parameter? Looking at lines 181 to 197 in 04fb358, it would appear that the namespace is coming from the resource rather than the kube client. Check the output of the rendered manifest.
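For what it's worth, the rendered manifest can also be captured from the SDK without touching the cluster. A sketch, assuming chart (*chart.Chart, e.g. from loader.Load) and vals are prepared elsewhere:

```go
// Run the upgrade as a dry run and print the rendered manifest; each
// object's metadata.namespace (or lack of one) is visible in the output.
upgradeClient.DryRun = true
rel, err := upgradeClient.Run("kafka", chart, vals)
if err != nil {
	log.Fatal(err)
}
fmt.Println(rel.Manifest)
```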
I also added some logging on my side. I am trying to figure out why everything works fine with the Helm CLI but not in my code. If it failed with the Helm CLI as well, I certainly wouldn't have spent this much time on it.

The output of the rendered manifest is:

---
# Source: kafka/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: kafka
labels:
app.kubernetes.io/name: kafka
helm.sh/chart: kafka-11.8.2
app.kubernetes.io/instance: kafka
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: kafka
---
# Source: kafka/templates/scripts-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: kafka-scripts
labels:
app.kubernetes.io/name: kafka
helm.sh/chart: kafka-11.8.2
app.kubernetes.io/instance: kafka
app.kubernetes.io/managed-by: Helm
data:
setup.sh: |-
#!/bin/bash
ID="${MY_POD_NAME#"kafka-"}"
export KAFKA_CFG_BROKER_ID="$ID"
exec /entrypoint.sh /run.sh
---
# Source: kafka/charts/zookeeper/templates/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
name: kafka-zookeeper-headless
namespace: kafka
labels:
app.kubernetes.io/name: zookeeper
helm.sh/chart: zookeeper-5.21.5
app.kubernetes.io/instance: kafka
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: zookeeper
spec:
type: ClusterIP
clusterIP: None
publishNotReadyAddresses: true
ports:
- name: tcp-client
port: 2181
targetPort: client
- name: follower
port: 2888
targetPort: follower
- name: tcp-election
port: 3888
targetPort: election
selector:
app.kubernetes.io/name: zookeeper
app.kubernetes.io/instance: kafka
app.kubernetes.io/component: zookeeper
---
# Source: kafka/charts/zookeeper/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
name: kafka-zookeeper
namespace: kafka
labels:
app.kubernetes.io/name: zookeeper
helm.sh/chart: zookeeper-5.21.5
app.kubernetes.io/instance: kafka
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: zookeeper
spec:
type: ClusterIP
ports:
- name: tcp-client
port: 2181
targetPort: client
- name: follower
port: 2888
targetPort: follower
- name: tcp-election
port: 3888
targetPort: election
selector:
app.kubernetes.io/name: zookeeper
app.kubernetes.io/instance: kafka
app.kubernetes.io/component: zookeeper
---
# Source: kafka/templates/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
name: kafka-headless
labels:
app.kubernetes.io/name: kafka
helm.sh/chart: kafka-11.8.2
app.kubernetes.io/instance: kafka
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: kafka
spec:
type: ClusterIP
clusterIP: None
ports:
- name: tcp-client
port: 9092
protocol: TCP
targetPort: kafka-client
- name: tcp-internal
port: 9093
protocol: TCP
targetPort: kafka-internal
selector:
app.kubernetes.io/name: kafka
app.kubernetes.io/instance: kafka
app.kubernetes.io/component: kafka
---
# Source: kafka/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
name: kafka
labels:
app.kubernetes.io/name: kafka
helm.sh/chart: kafka-11.8.2
app.kubernetes.io/instance: kafka
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: kafka
spec:
type: ClusterIP
ports:
- name: tcp-client
port: 9092
protocol: TCP
targetPort: kafka-client
nodePort: null
selector:
app.kubernetes.io/name: kafka
app.kubernetes.io/instance: kafka
app.kubernetes.io/component: kafka
---
# Source: kafka/charts/zookeeper/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: kafka-zookeeper
namespace: kafka
labels:
app.kubernetes.io/name: zookeeper
helm.sh/chart: zookeeper-5.21.5
app.kubernetes.io/instance: kafka
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: zookeeper
role: zookeeper
spec:
serviceName: kafka-zookeeper-headless
replicas: 1
podManagementPolicy: Parallel
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: zookeeper
app.kubernetes.io/instance: kafka
app.kubernetes.io/component: zookeeper
template:
metadata:
name: kafka-zookeeper
labels:
app.kubernetes.io/name: zookeeper
helm.sh/chart: zookeeper-5.21.5
app.kubernetes.io/instance: kafka
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: zookeeper
spec:
securityContext:
fsGroup: 1001
containers:
- name: zookeeper
image: docker.io/bitnami/zookeeper:3.6.1-debian-10-r88
imagePullPolicy: "IfNotPresent"
securityContext:
runAsUser: 1001
command:
- bash
- -ec
- |
# Execute entrypoint as usual after obtaining ZOO_SERVER_ID based on POD hostname
HOSTNAME=`hostname -s`
if [[ $HOSTNAME =~ (.*)-([0-9]+)$ ]]; then
ORD=${BASH_REMATCH[2]}
export ZOO_SERVER_ID=$((ORD+1))
else
echo "Failed to get index from hostname $HOST"
exit 1
fi
exec /entrypoint.sh /run.sh
resources:
requests:
cpu: 250m
memory: 256Mi
env:
- name: ZOO_DATA_LOG_DIR
value: ""
- name: ZOO_PORT_NUMBER
value: "2181"
- name: ZOO_TICK_TIME
value: "2000"
- name: ZOO_INIT_LIMIT
value: "10"
- name: ZOO_SYNC_LIMIT
value: "5"
- name: ZOO_MAX_CLIENT_CNXNS
value: "60"
- name: ZOO_4LW_COMMANDS_WHITELIST
value: "srvr, mntr, ruok"
- name: ZOO_LISTEN_ALLIPS_ENABLED
value: "no"
- name: ZOO_AUTOPURGE_INTERVAL
value: "0"
- name: ZOO_AUTOPURGE_RETAIN_COUNT
value: "3"
- name: ZOO_MAX_SESSION_TIMEOUT
value: "40000"
- name: ZOO_SERVERS
value: kafka-zookeeper-0.kafka-zookeeper-headless.kafka.svc.cluster.local:2888:3888
- name: ZOO_ENABLE_AUTH
value: "no"
- name: ZOO_HEAP_SIZE
value: "1024"
- name: ZOO_LOG_LEVEL
value: "ERROR"
- name: ALLOW_ANONYMOUS_LOGIN
value: "yes"
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
ports:
- name: client
containerPort: 2181
- name: follower
containerPort: 2888
- name: election
containerPort: 3888
livenessProbe:
exec:
command: ['/bin/bash', '-c', 'echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok']
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
readinessProbe:
exec:
command: ['/bin/bash', '-c', 'echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok']
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
volumeMounts:
- name: data
mountPath: /bitnami/zookeeper
volumes:
volumeClaimTemplates:
- metadata:
name: data
annotations:
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "8Gi"
---
# Source: kafka/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: kafka
labels:
app.kubernetes.io/name: kafka
helm.sh/chart: kafka-11.8.2
app.kubernetes.io/instance: kafka
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: kafka
spec:
podManagementPolicy: Parallel
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: kafka
app.kubernetes.io/instance: kafka
app.kubernetes.io/component: kafka
serviceName: kafka-headless
updateStrategy:
type: "RollingUpdate"
template:
metadata:
labels:
app.kubernetes.io/name: kafka
helm.sh/chart: kafka-11.8.2
app.kubernetes.io/instance: kafka
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: kafka
spec:
securityContext:
fsGroup: 1001
runAsUser: 1001
serviceAccountName: kafka
containers:
- name: kafka
image: docker.io/bitnami/kafka:2.6.0-debian-10-r0
imagePullPolicy: "IfNotPresent"
command:
- /scripts/setup.sh
env:
- name: BITNAMI_DEBUG
value: "false"
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: KAFKA_CFG_ZOOKEEPER_CONNECT
value: "kafka-zookeeper"
- name: KAFKA_INTER_BROKER_LISTENER_NAME
value: "INTERNAL"
- name: KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP
value: "INTERNAL:PLAINTEXT,CLIENT:PLAINTEXT"
- name: KAFKA_CFG_LISTENERS
value: "INTERNAL://:9093,CLIENT://:9092"
- name: KAFKA_CFG_ADVERTISED_LISTENERS
value: "INTERNAL://$(MY_POD_NAME).kafka-headless.kafka.svc.cluster.local:9093,CLIENT://$(MY_POD_NAME).kafka-headless.kafka.svc.cluster.local:9092"
- name: ALLOW_PLAINTEXT_LISTENER
value: "yes"
- name: KAFKA_CFG_DELETE_TOPIC_ENABLE
value: "false"
- name: KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE
value: "true"
- name: KAFKA_HEAP_OPTS
value: "-Xmx1024m -Xms1024m"
- name: KAFKA_CFG_LOG_FLUSH_INTERVAL_MESSAGES
value: "10000"
- name: KAFKA_CFG_LOG_FLUSH_INTERVAL_MS
value: "1000"
- name: KAFKA_CFG_LOG_RETENTION_BYTES
value: "1073741824"
- name: KAFKA_CFG_LOG_RETENTION_CHECK_INTERVALS_MS
value: "300000"
- name: KAFKA_CFG_LOG_RETENTION_HOURS
value: "168"
- name: KAFKA_CFG_MESSAGE_MAX_BYTES
value: "1000012"
- name: KAFKA_CFG_LOG_SEGMENT_BYTES
value: "1073741824"
- name: KAFKA_CFG_LOG_DIRS
value: "/bitnami/kafka/data"
- name: KAFKA_CFG_DEFAULT_REPLICATION_FACTOR
value: "1"
- name: KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR
value: "1"
- name: KAFKA_CFG_TRANSACTION_STATE_LOG_REPLICATION_FACTOR
value: "1"
- name: KAFKA_CFG_TRANSACTION_STATE_LOG_MIN_ISR
value: "1"
- name: KAFKA_CFG_NUM_IO_THREADS
value: "8"
- name: KAFKA_CFG_NUM_NETWORK_THREADS
value: "3"
- name: KAFKA_CFG_NUM_PARTITIONS
value: "1"
- name: KAFKA_CFG_NUM_RECOVERY_THREADS_PER_DATA_DIR
value: "1"
- name: KAFKA_CFG_SOCKET_RECEIVE_BUFFER_BYTES
value: "102400"
- name: KAFKA_CFG_SOCKET_REQUEST_MAX_BYTES
value: "104857600"
- name: KAFKA_CFG_SOCKET_SEND_BUFFER_BYTES
value: "102400"
- name: KAFKA_CFG_ZOOKEEPER_CONNECTION_TIMEOUT_MS
value: "6000"
ports:
- name: kafka-client
containerPort: 9092
- name: kafka-internal
containerPort: 9093
livenessProbe:
tcpSocket:
port: kafka-client
initialDelaySeconds: 10
timeoutSeconds: 5
failureThreshold:
periodSeconds:
successThreshold:
readinessProbe:
tcpSocket:
port: kafka-client
initialDelaySeconds: 5
timeoutSeconds: 5
failureThreshold: 6
periodSeconds:
successThreshold:
resources:
limits: {}
requests: {}
volumeMounts:
- name: data
mountPath: /bitnami/kafka
- name: scripts
mountPath: /scripts/setup.sh
subPath: setup.sh
volumes:
- name: scripts
configMap:
name: kafka-scripts
defaultMode: 0755
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "8Gi"
I just confirmed that if I change the default namespace in my kubeconfig, then everything starts working as expected. So it does seem the issue is with setting the correct namespace for the kube client, but I am not sure where it goes wrong. All my code is the few lines I posted above, which I took from the cli package.
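For anyone landing here: a workaround commonly suggested for this class of problem is to build the action configuration from a getter whose namespace is pinned explicitly, so it never falls back to the kubeconfig default. A sketch under that assumption, not a confirmed fix for this exact report:

```go
package main

import (
	"log"
	"os"

	"helm.sh/helm/v3/pkg/action"
	"k8s.io/cli-runtime/pkg/genericclioptions"
)

// newConfigForNamespace builds an action.Configuration whose kube client
// explicitly targets the given namespace instead of the kubeconfig default.
func newConfigForNamespace(namespace string) (*action.Configuration, error) {
	flags := genericclioptions.NewConfigFlags(false)
	flags.Namespace = &namespace // pin the target namespace on the getter

	actionConfig := new(action.Configuration)
	err := actionConfig.Init(flags, namespace, os.Getenv("HELM_DRIVER"), log.Printf)
	return actionConfig, err
}

func main() {
	cfg, err := newConfigForNamespace("kafka")
	if err != nil {
		log.Fatal(err)
	}
	upgrade := action.NewUpgrade(cfg)
	upgrade.Namespace = "kafka"
	log.Printf("upgrade client targeting namespace %s", upgrade.Namespace)
}
```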
Just checking in here. Did you happen to figure out what may be causing the issue?
Not at all. I banged my head against it for two days and in the end I gave up. Either it's something very obscure, or it's something really obvious that I am just not able to see. I deployed my app in Kubernetes and noticed it behaves the same way when running from inside the cluster. Since I have to upgrade two charts that live in two different namespaces, I ended up deploying my app twice, once per namespace, and that works. Normally I would expect a single app to handle all namespaces correctly, but for some reason the upgrade still decides to create StatefulSets in the default namespace.
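If an explicit-namespace getter does work, a single app can still serve several namespaces by keeping one action.Configuration per namespace. A sketch reusing the hypothetical newConfigForNamespace helper from above (the namespace list is illustrative):

```go
// One Configuration per target namespace, since a Configuration is bound
// to the namespace its client getter was created with.
configs := make(map[string]*action.Configuration)
for _, ns := range []string{"kafka", "monitoring"} { // illustrative list
	cfg, err := newConfigForNamespace(ns)
	if err != nil {
		log.Fatal(err)
	}
	configs[ns] = cfg
}

// Each upgrade then uses the configuration matching its release's namespace.
upgradeKafka := action.NewUpgrade(configs["kafka"])
upgradeKafka.Namespace = "kafka"
```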
This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.

Closing as stale.
Just a note to anybody who comes across this issue. The solution proposed in #9171 worked for me (e.g., setting the namespace explicitly when building the client configuration).
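The exact snippet that comment pointed at isn't quoted here, but a related variant (assuming a Helm version where cli.EnvSettings exposes a SetNamespace method) is to steer the default getter before Init, so the kube client stops using the kubeconfig default:

```go
// Point the default RESTClientGetter at the target namespace before
// initializing the action configuration.
settings := cli.New()
settings.SetNamespace("kafka")

actionConfig := new(action.Configuration)
if err := actionConfig.Init(settings.RESTClientGetter(), "kafka",
	os.Getenv("HELM_DRIVER"), log.Printf); err != nil {
	log.Fatal(err)
}
```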
#9171 didn't solve the problem for me. What seems to work is the solution proposed by Hypher.
Output of helm version:

Output of kubectl version:

Cloud Provider/Platform (AKS, GKE, Minikube etc.): Rancher on bare-metal
I am writing an application using the Helm Go SDK which is supposed to upgrade an already deployed Helm chart on my cluster. The chart is bitnami/kafka and it is deployed in the namespace kafka. All works well apart from the fact that, for some reason, when I actually perform the upgrade, Helm starts creating StatefulSets and ConfigMaps in my default namespace and deletes things from the kafka namespace. I have already spent a day investigating this and I am getting nowhere, so some help would be appreciated.

P.S.: The output of helm upgrade -n kafka --reuse-values --set replicaCount=4 kafka bitnami/kafka --debug is correct: everything targets the kafka namespace.