
What format should the underlying "raw format device" be in? #393

Closed
gilgameshskytrooper opened this issue Nov 15, 2017 · 7 comments


@gilgameshskytrooper

commented Nov 15, 2017

Before I start, thanks for the help last time.

Now I'm back to the grind with some fresh hardware (2 internal SATA HDDs for the worker nodes and a USB external drive for the master node, all on Ubuntu 16.04 LTS).

To start, I attempted to format all the drives as ext4 via the command mkfs.ext4 /dev/sdb. However, during deployment, the script hangs while creating the first node.

Using Kubernetes CLI.
Using namespace "default".
Checking for pre-existing resources...
  GlusterFS pods ... not found.
  deploy-heketi pod ... not found.
  heketi pod ... not found.
  gluster-s3 pod ... not found.
Creating initial resources ... serviceaccount "heketi-service-account" created
clusterrolebinding "heketi-sa-view" created
clusterrolebinding "heketi-sa-view" labeled
OK
node "kraken" labeled
node "kraken01" labeled
node "kraken02" labeled
daemonset "glusterfs" created
Waiting for GlusterFS pods to start ... OK
secret "heketi-config-secret" created
secret "heketi-config-secret" labeled
service "deploy-heketi" created
deployment "deploy-heketi" created
Waiting for deploy-heketi pod to start ... OK
Creating cluster ... ID: e75792262e403db1cfcfbebdd6894f54
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node kraken ... ID: 49c65133271a827fff1b1e1d8315bdf3
^C
$ cd ~/gluster-kubernetes/deploy; and ./gk-deploy -gy --abort
Using Kubernetes CLI.
Using namespace "default".
deployment "deploy-heketi" deleted
pod "deploy-heketi-5c45f969bd-zsd6m" deleted
service "deploy-heketi" deleted
secret "heketi-config-secret" deleted
serviceaccount "heketi-service-account" deleted
clusterrolebinding "heketi-sa-view" deleted
No resources found
node "kraken" labeled
node "kraken01" labeled
node "kraken02" labeled
daemonset "glusterfs" deleted
$ cd ~/gluster-kubernetes/deploy; and ./gk-deploy -gy topology.json
Using Kubernetes CLI.
Using namespace "default".
Checking for pre-existing resources...
  GlusterFS pods ... not found.
  deploy-heketi pod ... not found.
  heketi pod ... not found.
  gluster-s3 pod ... not found.
Creating initial resources ... serviceaccount "heketi-service-account" created
clusterrolebinding "heketi-sa-view" created
clusterrolebinding "heketi-sa-view" labeled
OK
node "kraken" labeled
node "kraken01" labeled
node "kraken02" labeled
daemonset "glusterfs" created
Waiting for GlusterFS pods to start ... OK
secret "heketi-config-secret" created
secret "heketi-config-secret" labeled
service "deploy-heketi" created
deployment "deploy-heketi" created
Waiting for deploy-heketi pod to start ... OK
Creating cluster ... ID: 4a985ab2336cdab165dc3f500d29bbb6
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node kraken ... ID: 81ad9ef7ce077169432aafc4a2814455

Next, since all the documentation about setting up GlusterFS directly on bare metal says the underlying filesystem should be XFS, I reformatted the drives as XFS using the following commands:

$ sudo su
# mkfs.xfs /dev/sdb
# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.27.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

/dev/sdb: device contains a valid 'xfs' signature; it is strongly recommended to wipe the device with wipefs(8) if this is unexpected, in order to avoid possible collisions

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x177938dc.

Command (m for help): wipefs /dev/sdb
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

where wipefs /dev/sdb is a command I typed at the fdisk prompt (fdisk evidently interpreted the leading w as its write command, hence "The partition table has been altered.").

However, I get the exact same hang when running ./gk-deploy -gy topology.json.

What should I format the underlying storage device to be?

@jarrpa

Contributor

commented Nov 15, 2017

When we say "raw block devices", we mean there should be no formatting at all: no partitions, no filesystem, no LVM artifacts, nothing. We suggest running wipefs -a to make sure you get everything.
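Concretely, that wipe could look like the following sketch. It is demonstrated on a scratch image file rather than a real disk so it is safe to run; for an actual deployment you would point wipefs at the device path from your topology.json (e.g. /dev/sdb), which destroys all data on it.

```shell
# Safe demonstration on a scratch image file, not a real disk.
img=scratch.img
dd if=/dev/zero of="$img" bs=1M count=8 status=none

# Simulate a leftover signature (mkswap writes one, much like mkfs would)
mkswap "$img" >/dev/null 2>&1

# Read-only check: lists any filesystem/swap/partition-table signatures
wipefs "$img"

# Destructive erase of ALL signatures (-a = all)
wipefs -a "$img" >/dev/null

# No output now means the device is "raw", as gk-deploy expects
wipefs "$img"
rm -f "$img"
```

Running the read-only form (wipefs with no options) first is a cheap way to confirm whether a device still carries an ext4/xfs/LVM signature before deploying.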

@JohnStrunk

Member

commented Nov 15, 2017

@gilgameshskytrooper

Author

commented Nov 15, 2017

For now, this seems to work (i.e., ./gk-deploy can initialize all the nodes without a problem)

@gilgameshskytrooper

Author

commented Nov 18, 2017

I have no further issues arising from GlusterFS! Thanks so much @jarrpa and @JohnStrunk for all your help!

@jayunit100


commented Jun 26, 2018

What if you're on a cluster with no externally mounted, clean raw devices?
Since we're in containers, can Gluster just use tmpfs on disk or something, from inside the containers?

@JohnStrunk

Member

commented Jun 26, 2018

Heketi assumes there is a raw device on which LVM can be used to carve bricks for volumes. There isn't really a way of just giving the Gluster pod a file system and still using dynamic provisioning.

You have a couple options:

  • Skip dynamic provisioning and manage Gluster and volumes yourself, manually creating kube PVs
  • Find a way to expose a block device, potentially via a loop device. Heketi should be able to run LVM on it, and it should work, but I have not personally tried it. Creating the topology file may also be tricky depending on how consistently the devices get named.
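The loop-device option above could be sketched as follows. This is an untested illustration, not a supported recipe: the paths and sizes are made up, and the losetup attach step requires root on each node.

```shell
# Hypothetical sketch: exposing a file-backed loop device for heketi.
# Paths and sizes are illustrative; losetup must be run as root.

# 1. Create a sparse backing file to serve as the "disk"
mkdir -p /var/lib/gluster-loop
truncate -s 100G /var/lib/gluster-loop/brick0.img

# 2. Attach it as a block device; --find picks the next free /dev/loopN
#    and --show prints the device name that was assigned
LOOPDEV=$(losetup --find --show /var/lib/gluster-loop/brick0.img)
echo "Attached as $LOOPDEV"

# 3. Reference $LOOPDEV (e.g. /dev/loop0) in topology.json for this node.
#    As noted above, loop device names may not be stable across reboots,
#    so a boot-time script to re-attach them consistently may be needed.
```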
@phlogistonjohn

Contributor

commented Jun 27, 2018

I think that @ansiwen has recently been using loopback devices successfully.

(And to be extra silly for a moment: heketi doesn't really care what's beneath the block device, so you could choose to put LVM on your loopback device and expose an LVM LV to heketi! I just did it to prove to myself it would work -- but perhaps this is a don't-try-this-at-home scenario :-))
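For the curious, that don't-try-this-at-home stack might look like this. Every name here is illustrative, every step requires root, and this is exactly the kind of nesting the comment above is joking about:

```shell
# Hypothetical sketch: LVM on a loopback file, with the resulting LV
# handed to heketi as its "raw" device. Requires root; names are made up.

mkdir -p /var/lib/gluster-loop
truncate -s 100G /var/lib/gluster-loop/pv0.img
LOOPDEV=$(losetup --find --show /var/lib/gluster-loop/pv0.img)

# Build an LVM stack on the loop device
pvcreate "$LOOPDEV"
vgcreate vg_heketi "$LOOPDEV"
lvcreate -n lv_heketi -l 100%FREE vg_heketi

# /dev/vg_heketi/lv_heketi is itself an unformatted block device, so it
# could be listed in topology.json; heketi would then layer its OWN LVM
# on top of it.
```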
