PVC and PV are not re-used when redeploying/updating MM #260
Just to confirm: if you use the helm chart to upgrade your Mattermost instance, then all data is lost (configuration, messages, uploaded files, etc.)?
Messages are stored in the DB, and the config lives outside the data volume. Only files are/were lost for me.
Do you have steps to reproduce the issue?
Simply changing the image tag and redeploying should suffice; at least that is what caused this twice on our instance. In general, anything that forces a pod recreation should be able to trigger this, as the PVCs were not re-used for new pods.
I will try to reproduce the issue. Just one final question: which chart are you using, the Team edition or the Enterprise one? @pat-s
Thanks. Team edition.
@pat-s my data and the custom-installed plugin were in place.
(Pressed the wrong button to comment.)
They were not deleted on my side, just not re-used; i.e. the chart created new PVs and PVCs, which then had no content. I can try to replicate the behaviour again on our dev cluster if needed.
I ran into a different issue; just sharing some experience from that. helm upgrade reused the PVC volumes from before every time, so everything is working as expected on the helm chart's side. Terraform's helm provider seems to be the issue in your case, @pat-s.

> I bumped into the error after cancelling a previous run (the cloud provider was stuck on fulfilling a PVC request). That left some orphan resources (svc, cm, sa, ...); after deleting them I could re-run the apply just fine.

Originally posted by @marpada in hashicorp/terraform-provider-helm#425 (comment)
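To check whether a cancelled run left orphan resources of the kind described above, one could run something like the following. This is a hedged sketch, not from the thread; the `mattermost` namespace matches the terraform config later in this issue, but adjust it to your deployment:

```shell
# PVs whose claims are gone but which were retained show status "Released"
kubectl get pv | grep Released

# List PVCs in the mattermost namespace to spot newly created, empty claims
kubectl get pvc -n mattermost

# Inspect other objects a cancelled run may have left behind
kubectl get svc,cm,sa -n mattermost
```

Deleting the leftover objects (as in the quoted comment) should then let the apply run through again.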
I am still hitting this issue and can still reproduce it (using TF + helm). It does not necessarily happen when updating the chart, but when (for whatever reason) removing the deployment and creating a new one with the same settings. This bugs me quite a bit, and this helm chart is the only one of the ~20 I maintain across different clusters that has this issue. The only workable solution seems to be to not let the chart create the PVCs, but to create them with terraform and point the chart at them via existingClaim.

In the MM helm chart I reference the existing PVCs like so:

```yaml
persistence:
  # location: /mattermost/data
  data:
    enabled: true
    size: 10Gi
    storageClass: ebs-sc
    accessMode: ReadWriteOnce
    existingClaim: "mattermost-team-edition"
  # location: /mattermost/client/plugins
  plugins:
    enabled: true
    size: 1Gi
    storageClass: ebs-sc
    accessMode: ReadWriteOnce
    existingClaim: "mattermost-team-edition-plugins"
```

The PVCs and corresponding PVs are created (or, in my case, imported) by terraform:

```hcl
resource "kubernetes_persistent_volume_claim" "mattermost-team-edition-plugins" {
  metadata {
    name      = "mattermost-team-edition-plugins"
    namespace = "mattermost"
  }
  spec {
    access_modes       = ["ReadWriteOnce"]
    storage_class_name = "ebs-sc"
    resources {
      requests = {
        storage = "1Gi"
      }
    }
    volume_name = "pvc-42ffb542-9abf-4a30-80e1-af8ecd98ca05"
  }
}

resource "kubernetes_persistent_volume_claim" "mattermost-team-edition" {
  metadata {
    name      = "mattermost-team-edition"
    namespace = "mattermost"
  }
  spec {
    access_modes       = ["ReadWriteOnce"]
    storage_class_name = "ebs-sc"
    resources {
      requests = {
        storage = "10Gi"
      }
    }
    volume_name = "pvc-4fe296f3-cef0-4e7a-b366-9217f19afe6b"
  }
}

resource "kubernetes_persistent_volume" "pvc-42ffb542-9abf-4a30-80e1-af8ecd98ca05" {
  metadata {
    name = "pvc-42ffb542-9abf-4a30-80e1-af8ecd98ca05"
  }
  spec {
    capacity = {
      storage = "1Gi"
    }
    access_modes                     = ["ReadWriteOnce"]
    persistent_volume_reclaim_policy = "Retain"
    storage_class_name               = "ebs-sc"
    persistent_volume_source {
      csi {
        driver        = "ebs.csi.aws.com"
        fs_type       = "ext4"
        read_only     = false
        volume_handle = "vol-0a2bcb6a8c26cee79"
        volume_attributes = {
          "storage.kubernetes.io/csiProvisionerIdentity" : "1657554492068-8081-ebs.csi.aws.com"
        }
      }
    }
  }
}

resource "kubernetes_persistent_volume" "pvc-4fe296f3-cef0-4e7a-b366-9217f19afe6b" {
  metadata {
    name = "pvc-4fe296f3-cef0-4e7a-b366-9217f19afe6b"
  }
  spec {
    capacity = {
      storage = "10Gi"
    }
    access_modes                     = ["ReadWriteOnce"]
    persistent_volume_reclaim_policy = "Retain"
    storage_class_name               = "ebs-sc"
    persistent_volume_source {
      csi {
        driver        = "ebs.csi.aws.com"
        fs_type      = "ext4"
        read_only     = false
        volume_handle = "vol-05e73ef89055b933c"
        volume_attributes = {
          "storage.kubernetes.io/csiProvisionerIdentity" : "1657554492068-8081-ebs.csi.aws.com"
        }
      }
    }
  }
}
```
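Since the PVs and PVCs above pre-date their terraform management, they would have had to be imported into state before the first apply. A sketch of what those import commands could look like, assuming the kubernetes provider's usual import ID formats (`namespace/name` for namespaced PVCs, plain `name` for cluster-scoped PVs); the resource addresses and object names are taken from the config above:

```shell
# Import the pre-existing PVCs (ID format: namespace/name)
terraform import kubernetes_persistent_volume_claim.mattermost-team-edition mattermost/mattermost-team-edition
terraform import kubernetes_persistent_volume_claim.mattermost-team-edition-plugins mattermost/mattermost-team-edition-plugins

# Import the pre-existing PVs (cluster-scoped, so just the name)
terraform import kubernetes_persistent_volume.pvc-4fe296f3-cef0-4e7a-b366-9217f19afe6b pvc-4fe296f3-cef0-4e7a-b366-9217f19afe6b
terraform import kubernetes_persistent_volume.pvc-42ffb542-9abf-4a30-80e1-af8ecd98ca05 pvc-42ffb542-9abf-4a30-80e1-af8ecd98ca05
```

After importing, `terraform plan` should show no destructive changes if the config matches the live objects.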
Chart version: 5.4.0
K8s version: 1.21.4
When redeploying or updating, the helm chart creates a new PV and a new PVC instead of re-using the old ones.
A workaround is to set
persistence.data.existingClaim
to the name of a PVC that claims an existing PV containing the data. However, this requires the PV and PVC to already exist, and I assume it won't work on the first deployment, when neither exists yet.
Also, this looks like a bug to me, as I'd expect the chart to re-use the previous PVC.
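Independently of who creates the PVC, data loss from this bug can be limited by making sure the PV itself is not reclaimed when its claim is deleted during a redeploy. A hedged sketch using one of the PV names from this thread as an example:

```shell
# Set the reclaim policy to Retain so the volume (and its data) survives
# deletion of the PVC, e.g. during a chart re-deploy
kubectl patch pv pvc-4fe296f3-cef0-4e7a-b366-9217f19afe6b \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```

A Retained PV must be cleared of its old claimRef (or referenced via `existingClaim`/`volume_name`, as above) before a new PVC can bind to it.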
More specifically, I am now using the TF setup shown above, which works across updates.
The same applies also to the PVC for plugins.
This might also relate to #200 and #251.