This repository has been archived by the owner on Jul 16, 2018. It is now read-only.

have a gofabric8 erase-pvc command #598

Closed
jstrachan opened this issue Oct 5, 2017 · 4 comments
@jstrachan
Contributor

When working with fabric8 it's very common for keycloak to barf: e.g. reboot your laptop and try to restart your minikube/minishift VM, and keycloak won't start. WIT has similar issues too when you recreate the PVC for keycloak.

So it'd be handy to have a gofabric8 erase-pvc command like this:

$ gofabric8 erase-pvc keycloak-db-postgresql-data
persistentvolumeclaim "keycloak-db-postgresql-data" deleted
persistentvolumeclaim "keycloak-db-postgresql-data" created
pod "keycloak-db-3064796942-4vhn2" deleted
pod "keycloak-3463143011-1jhkc" deleted

i.e. it'd find the PVC, delete it, then recreate an empty version which has the same labels and spec but with the spec.volumeName value removed (so it's not bound to the old PV) and with no status. Also remove the annotations added by the storage implementation, like these:

    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"e3faf84f-a9b8-11e7-bc26-0ec1a5fcab50","leaseDurationSeconds":15,"acquireTime":"2017-10-05T10:41:51Z","renewTime":"2017-10-05T10:41:53Z","leaderTransitions":0}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"labels":{"group":"io.fabric8.apps","project":"keycloak-db","provider":"fabric8","version":"1.0.2"},"name":"keycloak-db-postgresql-data","namespace":"fabric8"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: k8s.io/minikube-hostpath

Then extra bonus points for also deleting all pods which reference this PVC. Even more brownie points for also deleting all pods which depend on those pods (e.g. the keycloak DB and keycloak pods; ditto for WIT etc.), though for now only immediate dependencies would probably do the trick.

@jstrachan
Contributor Author

@chmouel fancy having a go?

@jstrachan
Contributor Author

This would mean after a reboot folks would just have to type gofabric8 erase-pvc keycloak-db-postgresql-data and hopefully keycloak would eventually start up again. They'd have to log in again from scratch, mind; but at least KC would start up so they could log in.

@chmouel chmouel self-assigned this Oct 5, 2017
@chmouel
Contributor

chmouel commented Oct 17, 2017

So it seems like if we have this original PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    maven.fabric8.io/source-url: jar:file:/home/jenkins/workspace/8io_fabric8-platform_master-4P5FOSFKYBLAPGDO7GHHNEOGKKERYH26KXBFORI5V7MRVJFY3QWA/apps/keycloak-db/target/keycloak-db-4.0.204.jar!/META-INF/fabric8/openshift.yml
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: 2017-10-02T14:57:42Z
  labels:
    app: keycloak-db
    group: io.fabric8.platform.apps
    provider: fabric8
    version: 4.0.204
  name: keycloak-db-postgresql-data
  namespace: fabric8
  resourceVersion: "1305"
  selfLink: /api/v1/namespaces/fabric8/persistentvolumeclaims/keycloak-db-postgresql-data
  uid: 0af44356-a782-11e7-97d7-fa163e649f97
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: pv0001
status:
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  - ReadOnlyMany
  capacity:
    storage: 100Gi
  phase: Bound

We would strip it down and keep only these fields:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    maven.fabric8.io/source-url: jar:file:/home/jenkins/workspace/8io_fabric8-platform_master-4P5FOSFKYBLAPGDO7GHHNEOGKKERYH26KXBFORI5V7MRVJFY3QWA/apps/keycloak-db/target/keycloak-db-4.0.204.jar!/META-INF/fabric8/openshift.yml
  labels:
    app: keycloak-db
    group: io.fabric8.platform.apps
    provider: fabric8
    version: 4.0.204
  name: keycloak-db-postgresql-data
  namespace: fabric8
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Should we keep the creationDate too?

@jstrachan
Contributor Author

sounds good! I'd trash the creationDate too really
