Failed to create snapshot #29
Description
Is this a BUG REPORT or FEATURE REQUEST?:
What happened:
VolumeSnapshot resources were created, but no VolumeSnapshotData resources were created as a result.
What you expected to happen:
I expected the VolumeSnapshot to create the necessary resources so that a PVC created with the stork volume provisioner could bind to a snapshot of the volume referenced in the VolumeSnapshot.
How to reproduce it (as minimally and precisely as possible):
- Have a PX volume bound to a PVC.
- Create a VolumeSnapshot resource referencing the above PVC.
- Create a PVC using the stork provisioner, referencing the snapshot resource from step 2.
- Try to run a Pod using the PVC from step 3.
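For reference, the repro steps can be sketched with manifests like the ones below. The resource names, namespace defaults, and storage sizes are hypothetical; the API group, the `stork-snapshot-sc` storage class, and the `snapshot.alpha.kubernetes.io/snapshot` annotation are based on the external-storage snapshot CRDs and stork's snapshot provisioner, not taken from this report:

```yaml
# Step 2: a VolumeSnapshot referencing the PX-backed PVC (names are hypothetical).
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: px-snap
spec:
  persistentVolumeClaimName: px-pvc      # the PVC bound to the PX volume
---
# Step 3: a PVC restored from the snapshot via the stork snapshot provisioner.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: px-pvc-from-snap
  annotations:
    snapshot.alpha.kubernetes.io/snapshot: px-snap
spec:
  storageClassName: stork-snapshot-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```

When the snapshot controller is healthy, creating the VolumeSnapshot should produce a matching VolumeSnapshotData object, which is what the restore PVC in step 3 ultimately binds against; in this report that VolumeSnapshotData never appears.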
Anything else we need to know?:
Using PX 1.2.18 with runC. This same setup had been creating snapshots just fine since the start of February; I'm not sure where it went wrong.
Environment:
- Kubernetes version (use `kubectl version`): 1.8.4_coreos.0
- Cloud provider or hardware configuration: Azure with L8s instances on CoreOS Tectonic
- OS (e.g. from /etc/os-release): CoreOS 1632.3.0
- Kernel (e.g. `uname -a`): 4.14.19-coreos
- Install tools:
- Others:
Logs from the running stork instance; look at 2018-02-22T20:08:13Z onward.
https://gist.github.com/592abd48df485ec5a55b787dd62a2099
After deleting the current stork leader, the next leader complained of malformed snapshots, then created the snapshots, and the pending Pods were scheduled.
https://gist.github.com/2d82e99592135bc72b793b60af4278f1
Tearing down the snapshots and trying again failed with the same stork instance that had recovered the first attempt: stork 1 failed and was killed, and stork 2 created the snapshots. I then tore down the snapshots and Pods and tried new snapshots; stork 2 failed just like stork 1. After killing stork 2, stork 3 picked up and created the snapshots.
Stork 3 was able to work through a teardown and bring-up again. Its logs are at https://gist.github.com/2c46a2f015799d09e9af35e9af78e6cf