
Install Rook/Ceph which supports CSI volume snapshot in preview environment #10718

Merged: 4 commits into main from jenting/install-a-storage-vendor-10201 on Jun 17, 2022

Conversation

@jenting jenting (Contributor) commented Jun 17, 2022

Description

Install Rook/Ceph v1.9.5, which supports CSI volume snapshots, in the preview environment.
We override some of the default values:
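
The overridden values themselves are not listed above. As a rough sketch only (the chart names and flags below are standard upstream Rook Helm usage and the values file is hypothetical, not taken from this PR), installing Rook/Ceph v1.9.5 with custom values could look like:

# Sketch: install the Rook operator and a Ceph cluster at v1.9.5,
# overriding chart defaults with a (hypothetical) values file.
$ helm repo add rook-release https://charts.rook.io/release
$ helm upgrade --install rook-ceph rook-release/rook-ceph \
    --namespace rook-ceph --create-namespace --version v1.9.5
$ helm upgrade --install rook-ceph-cluster rook-release/rook-ceph-cluster \
    --namespace rook-ceph --version v1.9.5 \
    -f preview-values.yaml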

Related Issue(s)

Fixes #10201

How to test

  1. Open the URL https://jenting-in5edb98e0f2.preview.gitpod-dev.com/workspaces
  2. Enable the feature flag persistent_volume_claim in the admin panel.
  3. Launch a workspace, for example, launch github.com/gitpod-io/gitpod.
  4. Write some data into it, for example with the small check shown after this list.
  5. Stop the workspace.
  6. Launch the workspace again and verify that the data persisted.
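
A minimal check for steps 4 and 6, run from the workspace terminal (the file name and content are arbitrary; /workspace is the directory backed by the persistent volume):

# Step 4: write a marker file onto the persistent volume.
$ echo "pvc-persistence-check $(date)" > /workspace/persistence-check.txt

# Step 6: after stopping and relaunching the workspace, the marker should still be there.
$ cat /workspace/persistence-check.txt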

Release Notes

None

Documentation

None

Signed-off-by: JenTing Hsiao <hsiaoairplane@gmail.com>
@werft-gitpod-dev-com

started the job as gitpod-build-jenting-install-a-storage-vendor-10201.22 because the annotations in the pull request description changed
(with .werft/ from main)

Signed-off-by: JenTing Hsiao <hsiaoairplane@gmail.com>
@jenting jenting marked this pull request as ready for review June 17, 2022 09:52
@jenting jenting requested review from sagor999 and a team June 17, 2022 09:52
@sagor999 sagor999 (Contributor) left a comment


Code LGTM, but adding a hold until I can test it (currently not an admin in this preview env, so I cannot enable the feature flag for PVC use).
/hold

Comment on lines +79 to +81
- name: storage
  disk:
    bus: virtio

I think this breaks VMs - I've reverted the one pushed to master here: #10735
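
For anyone debugging this, one way to check how the extra disk is exposed to the preview VM (a sketch only; the VMI name and a standard KubeVirt setup are assumptions, not taken from this PR):

# Show the disks/buses the VM instance is configured with (KubeVirt CRDs assumed).
$ kubectl get vmi <preview-vm-name> -o jsonpath='{.spec.domain.devices.disks}'

# Inside the VM, a virtio disk shows up as vdX, a SCSI disk as sdX.
$ lsblk -d -o NAME,SIZE,TYPE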

@vulkoingim vulkoingim (Contributor) left a comment


Looks good, besides the change to virtio I commented on 👍

I trust that you've tested it :)

@meysholdt (Member) commented

hi @jenting, thx for this PR!

I see you're adding a 2nd volume to all preview VMs... can you share how this is helpful?
For the sake of simplicity, would it also work if Rook/Ceph used folders on the root volume?

@sagor999 (Contributor) commented

Tested, works like a charm. Removing my hold.
/unhold

@roboquat roboquat merged commit 6d39f0e into main Jun 17, 2022
@roboquat roboquat deleted the jenting/install-a-storage-vendor-10201 branch June 17, 2022 16:46
@sagor999 (Contributor) commented

Why did it merge??? There was still a pending review required from @meysholdt
@gitpod-io/platform

@ArthurSens (Contributor) commented Jun 17, 2022

I believe Aleks' review already counted as platform's approval 🙂

@jenting (Contributor, Author) commented Jun 19, 2022

hi @jenting, thx for this PR!

I see you're adding a 2nd volume to all preview VMs... can you share how this is helpful? For the sake of simplicity, would it also work if Rook/Ceph used folders on the root volume?

Rook/Ceph has deprecated directory-based storage; see the referenced upstream issue.

According to the Rook/Ceph requirements, at least one of these local storage options is required:

  • Raw devices (no partitions or formatted filesystems)
  • Raw partitions (no formatted filesystem)

From the current VM image, the block device information is

$ lsblk -f
NAME FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
loop0
     squash 4.0                                                    0   100% /snap/core20/1405
loop1
     squash 4.0                                                    0   100% /snap/core18/2409
loop2
     squash 4.0                                                    0   100% /snap/google-cloud-sdk/243
loop3
     squash 4.0                                                    0   100% /snap/lxd/22923
loop4
     squash 4.0                                                    0   100% /snap/snapd/15534
loop5
     squash 4.0                                                    0   100% /snap/snapd/15904
sda                                                                         
├─sda1
│    xfs          root  4aa52ca2-508b-425c-b595-bc1667b2e800  164.4G    18% /var/lib/kubelet/pods/c700ec62-dcec-4434-90c5-64b9b29b5aa5/volume-subpaths/config/mysql/2
│                                                                           /var/lib/kubelet/pods/471099cc-7e33-4bd9-b41a-cb771f6ac5fe/volume-subpaths/configuration/image-builder-mk3/0
│                                                                           /var/lib/kubelet/pods/791bb338-c72b-4178-87ae-76c61991ca37/volume-subpaths/config/public-api-server/0
│                                                                           /var/lib/kubelet/pods/471099cc-7e33-4bd9-b41a-cb771f6ac5fe/volume-subpaths/gitpod-ca-certificate/update-ca-certificates/1
│                                                                           /var/lib/kubelet/pods/51cccdf7-1791-4593-901a-bc354bbf48c0/volume-subpaths/gitpod-ca-certificate/update-ca-certificates/1
│                                                                           /var/lib/kubelet/pods/78f92750-bf76-4999-9a46-26223fe90835/volume-subpaths/config/fluent-bit/1
│                                                                           /var/lib/kubelet/pods/78f92750-bf76-4999-9a46-26223fe90835/volume-subpaths/config/fluent-bit/0
│                                                                           /
├─sda14
│                                                                           
└─sda15
     vfat   FAT32 EFI   71CA-7080                              99.1M     5% /boot/efi
vda  iso966 Jolie cidata
                        2022-06-08-02-01-51-00                              

We could consider using sda14 or sda15. However, after deploying Rook/Ceph, sda14 and sda15 do not fulfill the basic Rook/Ceph requirements; here are the OSD prepare log and the ceph status output.

2022-06-07 15:39:08.572196 I | cephosd: skipping device "sda14": ["Insufficient space (<5GB)"].
2022-06-07 15:39:08.572246 I | cephosd: skipping device "sda15" with mountpoint "efi"
2022-06-07 15:39:08.572252 I | cephosd: skipping device "vda" because it contains a filesystem "iso9660"

[screenshot: ceph status output]

Therefore, without changing the existing partitions, the feasible way is to mount a 2nd block device as storage.
We chose 30Gi because the regular Gitpod workspace disk size is currently 30Gi.
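
A quick way to confirm the 2nd device was picked up (a sketch; the device name and the presence of the rook-ceph toolbox deployment are assumptions):

# The extra 30Gi disk should show up as a raw device: no FSTYPE and no partitions.
$ lsblk -f

# With the Rook toolbox deployed, the new OSD should appear in the cluster status.
$ kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status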

Development

Successfully merging this pull request may close these issues.

Install a storage vendor which supports CSI snapshot in preview env