[bitnami/ghost] Pods fail on init with GKE Autopilot and K8s Docker Desktop #25439
Hi, about this error:

It seems to me that the resource preset is not enough for your platform and requires a higher limit (which is strange, as we test this chart in all major k8s distros). Could you try setting higher resource limits?
While GKE clusters are most likely on your major k8s distro list, perhaps the testing is done on a GKE "standard" cluster, and "GKE Autopilot" has not been attempted. My previous testing was done without a sufficient understanding of how ephemeral-storage behaves as a resource with requests and limits. Following your suggestion, I provided the chart with these override defaults, with ample storage:

```yaml
# micro
resources:
  limits:
    cpu: 375m
    ephemeral-storage: 5Gi
    memory: 384Mi
  requests:
    cpu: 250m
    ephemeral-storage: 2Gi
    memory: 256Mi

# micro
volumePermissions:
  resources:
    limits:
      cpu: 375m
      ephemeral-storage: 5Gi
      memory: 384Mi
    requests:
      cpu: 250m
      ephemeral-storage: 2Gi
      memory: 256Mi

# small
mysql:
  primary:
    resources:
      limits:
        cpu: 700m
        ephemeral-storage: 5Gi
        memory: 768Mi
      requests:
        cpu: 500m
        ephemeral-storage: 2Gi
        memory: 512Mi
```

The MySQL pod appears to be healthy and logging appropriately. However, the ghost pod repeatedly tries to start and is evicted with this error:

Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17m gke.io/optimize-utilization-scheduler Successfully assigned ghost/ghost-8fcc8fbf5-ztmdt to gk3-autopilot-cluster-1-pool-2-9a2f203f-hql9
Normal Pulled 17m kubelet Container image "docker.io/bitnami/ghost:5.82.5-debian-12-r0" already present on machine
Normal Created 17m kubelet Created container prepare-base-dir
Normal Started 17m kubelet Started container prepare-base-dir
Normal Pulled 16m kubelet Container image "docker.io/bitnami/ghost:5.82.5-debian-12-r0" already present on machine
Normal Created 16m kubelet Created container ghost
Normal Started 16m kubelet Started container ghost
Warning Evicted 16m kubelet Pod ephemeral local storage usage exceeds the total limit of containers 50Mi.
Normal Killing 16m kubelet Stopping container ghost
Warning ExceededGracePeriod 16m kubelet Container runtime did not kill the pod within specified grace period.

Despite asking for 2Gi, there appears to be a hard limit on the GKE node that forces the pods to a 50Mi ceiling. This is evident in the pod description, which reflects the 50Mi cap:

```
Limits:
  cpu:                375m
  ephemeral-storage:  50Mi
  memory:             384Mi
Requests:
  cpu:                250m
  ephemeral-storage:  50Mi
  memory:             256Mi
```

My understanding is there are a few solutions:
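The effective requests/limits the cluster actually applied can also be dumped straight from the live pod spec; a minimal sketch, assuming the `ghost` namespace and the pod name shown in the events above:

```shell
# Print each container's name followed by its effective resource
# requests/limits, as recorded in the live pod spec.
# (Substitute the current pod name if it has been rescheduled.)
kubectl -n ghost get pod ghost-8fcc8fbf5-ztmdt \
  -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.resources}{"\n"}{end}'
```

If this already shows 50Mi, something between Helm and the kubelet is rewriting the values before the pod is scheduled.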
However, I'm confused because I was running a Helm chart for Ollama and hitting the exact same eviction error related to ephemeral-storage. I increased that chart's resource defaults with this:

```yaml
resources:
  limits:
    ephemeral-storage: 5Gi
  requests:
    ephemeral-storage: 5Gi
    memory: 2Gi
```

The problem went away and Ollama is healthy, with the pod reporting ephemeral-storage at 5Gi as requested:

```
Requests:
  cpu:                308m
  ephemeral-storage:  5Gi
  memory:             2Gi
```

This tells me I do not understand Autopilot, or I am wrong to assume it is what limits the ephemeral-storage request. Why would the Bitnami Ghost deployment be capped at 50Mi, yet this other chart has no problem requesting 5Gi? Thoughts?
Hi, this is weird, as we do not set any kind of cap on the resources. Could you confirm that the rendered YAML indeed sets ephemeral-storage to 5Gi? This 50Mi could be related to the resources section not being rendered correctly.
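One way to check the rendered YAML locally is to template the chart with the same overrides and grep the output; a sketch, assuming the overrides live in a hypothetical values-override.yaml:

```shell
# Render the chart offline (no cluster needed) and inspect what
# resource values the manifests would actually submit.
helm template ghost oci://registry-1.docker.io/bitnamicharts/ghost \
  --version 20.0.3 -f values-override.yaml \
  | grep -B 2 -A 2 'ephemeral-storage'
```

If the templated manifests show 5Gi but the live pod shows 50Mi, the cap is being applied by the cluster, not by the chart.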
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
Name and Version
bitnami/ghost 20.0.3
What architecture are you using?
amd64
What steps will reproduce the bug?
This is with a GKE Autopilot K8s cluster with the GKE Autopilot defaults.
helm install ghost bitnami/ghost -n ghost --create-namespace --version 20.0.3
then follow the Helm output instructions:
kubectl get svc --namespace ghost -w ghost
Wait for the external IP, then apply the upgrade:
helm upgrade --namespace ghost ghost oci://registry-1.docker.io/bitnamicharts/ghost --set service.type=LoadBalancer,ghostHost=$APP_HOST,ghostPassword=$GHOST_PASSWORD,mysql.auth.rootPassword=$MYSQL_ROOT_PASSWORD,mysql.auth.password=$MYSQL_PASSWORD
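For reference, the same parameters can be kept in a values file instead of a long --set string; a sketch with a hypothetical override-values.yaml (all values are placeholders):

```yaml
# override-values.yaml -- equivalent to the --set flags above
service:
  type: LoadBalancer
ghostHost: "<your-host>"
ghostPassword: "<ghost-password>"
mysql:
  auth:
    rootPassword: "<mysql-root-password>"
    password: "<mysql-password>"
```

Applied with `helm upgrade --namespace ghost ghost oci://registry-1.docker.io/bitnamicharts/ghost -f override-values.yaml`.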
Are you using any custom parameters or values?
Occurs with just the chart defaults.
I made several attempts with other parameters, such as setting "ephemeral-storage: 10Mi" or attempting to move ephemeral storage to another volume, but those did not work or were not configured correctly.
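For the record, ephemeral-storage overrides like the ones attempted above can also be passed inline; a sketch (the 2Gi/5Gi values here are arbitrary, not recommendations):

```shell
# Override only the ghost container's ephemeral-storage request/limit,
# keeping the values from the previous release for everything else.
helm upgrade --namespace ghost ghost oci://registry-1.docker.io/bitnamicharts/ghost \
  --set resources.requests.ephemeral-storage=2Gi \
  --set resources.limits.ephemeral-storage=5Gi \
  --reuse-values
```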
What is the expected behavior?
Pods and other objects remain healthy and running without restarts and the Ghost web access is available.
What do you see instead?
The result is:
Additional information
Kubernetes on GKE Autopilot
Same problem happens consistently with chart versions 20.0.3 (current) and 19.11.7.
MySQL appears healthy and running after the first install. The errors start on the helm upgrade, while ghost is initializing.
The mysql pod reports:
[ERROR] [MY-013178] [Server] Execution of server-side SQL statement 'ALTER TABLE user MODIFY ssl_type enum('','ANY','X509', 'SPECIFIED') NOT NULL; ' failed with error code = 3664, error message = 'Failed to delete SDI 'mysql.user' in tablespace 'mysql'.'.
The ghost pod reports:
message: 'Pod ephemeral local storage usage exceeds the total limit of containers 50Mi.
More details:
mysql pod:
ghost pod: (see status: section)
ghost pod
mysql pod