Install Rook/Ceph which supports CSI volume snapshot in preview environment #10718
Conversation
Signed-off-by: JenTing Hsiao <hsiaoairplane@gmail.com>
started the job as gitpod-build-jenting-install-a-storage-vendor-10201.22 because the annotations in the pull request description changed
code LGTM
but adding hold until I can test it (currently not an admin in this preview env, so cannot enable feature flag for PVC use)
/hold
```yaml
- name: storage
  disk:
    bus: virtio
```
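For context, this disk entry sits in the preview VM's KubeVirt manifest. A minimal sketch of where it lands in a VirtualMachineInstance spec (the `storage` disk entry is from this PR; the surrounding disk, volume, and claim names are illustrative, not taken from the actual manifest):

```yaml
# Illustrative excerpt of a KubeVirt VM spec; only the "storage" disk
# entry reflects this PR, the rest is a hypothetical surrounding context.
spec:
  domain:
    devices:
      disks:
        - name: root          # hypothetical existing boot disk
          disk:
            bus: scsi
        - name: storage       # the 2nd volume added by this PR
          disk:
            bus: virtio       # this bus value was later reported to break VMs (#10735)
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: storage    # hypothetical claim name
```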
I think this breaks VMs - I've reverted the one pushed to master here: #10735
Looks good, besides the change to virtio
I commented on 👍
I trust that you've tested it :)
hi @jenting, thx for this PR! I see you're adding a 2nd volume to all preview VMs... can you share how this is helpful?
Tested, works like a charm. Removing my hold.
Why did it merge??? There was still a pending review required from @meysholdt |
I believe Aleks review already counted as platform's approval 🙂 |
Rook/Ceph deprecated directory-based storage (see the referenced issue). According to the Rook/Ceph requirements, at least one of these local storage options is required: raw devices, raw partitions, LVM logical volumes, or persistent volumes available from a block-mode storage class.
From the current VM image, the block device information is:

```
$ lsblk -f
NAME     FSTYPE   FSVER  LABEL  UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
loop0    squashfs 4.0                                                      0   100% /snap/core20/1405
loop1    squashfs 4.0                                                      0   100% /snap/core18/2409
loop2    squashfs 4.0                                                      0   100% /snap/google-cloud-sdk/243
loop3    squashfs 4.0                                                      0   100% /snap/lxd/22923
loop4    squashfs 4.0                                                      0   100% /snap/snapd/15534
loop5    squashfs 4.0                                                      0   100% /snap/snapd/15904
sda
├─sda1   xfs             root   4aa52ca2-508b-425c-b595-bc1667b2e800  164.4G    18% /var/lib/kubelet/pods/c700ec62-dcec-4434-90c5-64b9b29b5aa5/volume-subpaths/config/mysql/2
│                                                                                   /var/lib/kubelet/pods/471099cc-7e33-4bd9-b41a-cb771f6ac5fe/volume-subpaths/configuration/image-builder-mk3/0
│                                                                                   /var/lib/kubelet/pods/791bb338-c72b-4178-87ae-76c61991ca37/volume-subpaths/config/public-api-server/0
│                                                                                   /var/lib/kubelet/pods/471099cc-7e33-4bd9-b41a-cb771f6ac5fe/volume-subpaths/gitpod-ca-certificate/update-ca-certificates/1
│                                                                                   /var/lib/kubelet/pods/51cccdf7-1791-4593-901a-bc354bbf48c0/volume-subpaths/gitpod-ca-certificate/update-ca-certificates/1
│                                                                                   /var/lib/kubelet/pods/78f92750-bf76-4999-9a46-26223fe90835/volume-subpaths/config/fluent-bit/1
│                                                                                   /var/lib/kubelet/pods/78f92750-bf76-4999-9a46-26223fe90835/volume-subpaths/config/fluent-bit/0
│                                                                                   /
├─sda14
└─sda15  vfat     FAT32  EFI    71CA-7080                              99.1M     5% /boot/efi
vda      iso9660  Joliet cidata 2022-06-08-02-01-51-00
```

We could consider using sda14 or sda15; however, after deploying Rook/Ceph, sda14 and sda15 don't fulfill the Rook/Ceph basic requirements. Here is the log:

```
2022-06-07 15:39:08.572196 I | cephosd: skipping device "sda14": ["Insufficient space (<5GB)"]
2022-06-07 15:39:08.572246 I | cephosd: skipping device "sda15" with mountpoint "efi"
2022-06-07 15:39:08.572252 I | cephosd: skipping device "vda" because it contains a filesystem "iso9660"
```

Therefore, without changing the existing partitions, the feasible way is to mount a 2nd block device for storage.
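Given that reasoning, the CephCluster would be pointed at the extra raw device rather than scanning all disks. A sketch of the relevant storage selection (the device name `sdb` is an assumption; the actual name depends on how the 2nd volume is attached to the VM):

```yaml
# Sketch of the CephCluster spec.storage section, assuming the added
# 2nd block device appears as /dev/sdb and is raw (no filesystem or
# partition table), which satisfies Rook's OSD requirements.
storage:
  useAllNodes: true
  useAllDevices: false
  devices:
    - name: sdb   # assumed device name of the 2nd volume
```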
Description
Install Rook/Ceph v1.9.5 in the preview environment which supports CSI volume snapshot.
We overwrite some default values:
- gitpod/.werft/vm/manifests/rook-ceph/operator.yaml (line 30 in d19c9b5)
- gitpod/.werft/vm/manifests/rook-ceph/storageclass-test.yaml (line 55 in d19c9b5)
- gitpod/.werft/vm/manifests/rook-ceph/storageclass-test.yaml (line 57 in d19c9b5)
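Since the point of this PR is CSI volume snapshot support, snapshots also need a VolumeSnapshotClass for the RBD CSI driver. A sketch following Rook's documented defaults for a cluster in the `rook-ceph` namespace (treat the driver and secret names as assumptions for this preview environment; they change if Rook is installed under a different namespace):

```yaml
# Sketch of a VolumeSnapshotClass for Rook's RBD CSI driver,
# using Rook's default names for the rook-ceph namespace.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbdplugin-snapclass
driver: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph
deletionPolicy: Delete
```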
Related Issue(s)
Fixes #10201
How to test
Enable the persistent_volume_claim feature flag in the admin panel.

Release Notes
Documentation
None