support glusterfs #2044

Closed
MartinForReal opened this Issue Aug 17, 2018 · 14 comments

MartinForReal (Contributor) commented Aug 17, 2018

Is this a bug report or feature request?

  • Feature Request

What should the feature do:
The Rook operator can provision a GlusterFS cluster on Kubernetes.
What is the use case behind this feature:
GlusterFS consumes relatively few resources (CPU, memory), but a cluster still takes time to manage.
Environment:
A Kubernetes environment.

@jbw976 jbw976 added the help wanted label Aug 17, 2018

jbw976 (Member) commented Aug 17, 2018

+1, I would love to see support for Gluster in Rook as well! There has been discussion in the past about this and I know there was some enthusiasm and demand. Adding the "help wanted" label to hopefully get an owner for this.

MartinForReal (Contributor) commented Aug 18, 2018

I would like to write a draft CRD definition.
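
Something along these lines, perhaps, following the pattern Rook uses for its other cluster CRDs. Every type and field name below is a placeholder for discussion, not an agreed design:

```go
// Hypothetical draft types for a GlusterCluster CRD, modeled on how Rook
// defines its other cluster CRDs with k8s.io/apimachinery. All names and
// fields are placeholders for discussion, not an agreed design.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// GlusterCluster describes a Gluster cluster the operator should provision.
type GlusterCluster struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata"`
	Spec              GlusterClusterSpec `json:"spec"`
}

// GlusterClusterSpec holds the user-facing configuration.
type GlusterClusterSpec struct {
	// Version is the Gluster container image version to deploy.
	Version string `json:"version"`
	// NodeCount is the number of Gluster server pods to run.
	NodeCount int `json:"nodeCount"`
	// DataDirHostPath is where each node stores its brick data.
	DataDirHostPath string `json:"dataDirHostPath"`
}

// GlusterClusterList is the list type required for CRD client generation.
type GlusterClusterList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata"`
	Items           []GlusterCluster `json:"items"`
}
```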

travisn (Member) commented Aug 18, 2018

@MartinForReal great to hear of your interest in this. Before you go too far, @JohnStrunk and @jarrpa have been working on an operator and CRD for Gluster, and we have been discussing integration with Rook.

@jbw976 jbw976 added the gluster label Aug 18, 2018

MartinForReal (Contributor) commented Aug 20, 2018

I can't find any document or record of that discussion. Could you please share your ideas with me? @JohnStrunk @jarrpa Thank you!

rohan47 (Member) commented Aug 22, 2018

I came across this old Google doc. Not sure if this is still being considered. https://docs.google.com/document/d/13-xsk0DazYCWrsgXqnEEOYMbiqQJxL1GV6dWTpqGvwM/edit#heading=h.sp44z9itvyja

obnoxxx commented Aug 23, 2018

Nice to see some interest here! Let's start discussing possible approaches, designs, etc.

@rohan47 writes:

I came across this old Google doc. Not sure if this is still being considered. https://docs.google.com/document/d/13-xsk0DazYCWrsgXqnEEOYMbiqQJxL1GV6dWTpqGvwM/edit#heading=h.sp44z9itvyja

Well, there are certainly several good starting points in that doc. One of the main problems originally was (imho) a couple of misconceptions, or just a lack of context about where we want to move with Gluster. My main criticism of the original document was that its proposal tries to replace several core aspects of Gluster. This would render Gluster unusable outside of Kubernetes, or at least you would have to spend duplicate effort to keep it usable there.

Our approach from the Gluster side is to keep the core business logic close to the Gluster core. That includes higher-level functionality like disk management, intelligent volume provisioning, and day-2 operations. In the currently released world this includes the heketi component (https://github.com/heketi/heketi), but that functionality will merge into a rewritten glusterd component of Gluster (https://github.com/gluster/glusterd2).

In my opinion, the operator should not implement these core aspects itself but reach out to them. The operator should gather the facts needed to decide when to call which of these core functions. If we can roughly agree on those primitives, then we should by all means pursue the project of integrating a Gluster operator with Rook!
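
To sketch what "reach out" could mean from the operator's side: the operator decides *when* to provision and delegates the *how* to the Gluster management daemon (heketi today, glusterd2 later). The endpoint address, URL path, and payload below are invented for illustration, not an actual API contract:

```go
// Illustrative only: an operator delegating volume creation to the gluster
// management daemon instead of implementing provisioning logic itself.
// The service address, URL path, and request body are assumptions for
// this sketch, not glusterd2's actual API.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type volumeCreateReq struct {
	Name    string `json:"name"`
	Replica int    `json:"replica"`
	Size    string `json:"size"`
}

// createVolume asks the management daemon to provision a volume; the
// operator only decides when to call it.
func createVolume(endpoint, name string) error {
	body, err := json.Marshal(volumeCreateReq{Name: name, Replica: 3, Size: "10GiB"})
	if err != nil {
		return err
	}
	resp, err := http.Post(endpoint+"/v1/volumes", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("volume create failed: %s", resp.Status)
	}
	return nil
}

func main() {
	// Hypothetical in-cluster service address for the management daemon.
	if err := createVolume("http://glusterd2.gluster.svc:24007", "vol1"); err != nil {
		fmt.Println(err)
	}
}
```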

This is currently supposed to become the home of the gluster operator: https://github.com/gluster/anthill

And this is @jarrpa's current prototype: https://github.com/jarrpa/anthill

bassam (Member) commented Aug 23, 2018

@obnoxxx the document predates glusterd2 and I agree that we should explore using glusterd2. One question that came up with the use of heketi was its need for a backing store for its own config. Does glusterd2 require a backing store for its config too?

obnoxxx commented Aug 23, 2018

@bassam writes:

@obnoxxx the document predates glusterd2 and I agree that we should explore using glusterd2.

Is the document more than 3 years old? (glusterd2 was started in 2015. ;-) )

But glusterd2 was not in a product then, and it is not yet; it will take some time until it is. It just made it into upstream Gluster 4.0 as a tech preview feature, so yeah, in that sense glusterd2 is further along now than it was before.

One question that came up with the use of heketi was it's need for a backing store for it's own config.

Well, we can change almost anything, but heketi needs to store some state. Let me put the problem in different terms: the main problem we have had with heketi and Gluster so far is that there are two places where state lives, one in heketi and one in Gluster, and that content has some overlap. We've been working on reducing the effects of this.

Does glusterd2 require a backing store for config too?

It uses its own etcd instance to store state internal to Gluster.
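
For illustration, writing and reading such state with the etcd v3 Go client looks roughly like this (the endpoint address and key layout are invented for this sketch, not glusterd2's actual schema):

```go
// Minimal sketch of keeping internal state in etcd, as glusterd2 does with
// its own instance. Endpoint address and key naming are invented for
// illustration.
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	// Persist a piece of cluster state under a well-known key prefix.
	if _, err := cli.Put(ctx, "/gluster/volumes/vol1", `{"replica":3}`); err != nil {
		panic(err)
	}

	// Read back all volume state by prefix.
	resp, err := cli.Get(ctx, "/gluster/volumes/", clientv3.WithPrefix())
	if err != nil {
		panic(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s = %s\n", kv.Key, kv.Value)
	}
}
```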

What's the main issue with a store for internal state of the application?

MartinForReal (Contributor) commented Aug 26, 2018

What's the main issue with a store for internal state of the application?

I think this internal storage makes it harder for the storage controller (heketi in this case) to achieve HA.
I've opened heketi/heketi#1333.

obnoxxx commented Aug 27, 2018

@MartinForReal writes:

@obnoxxx writes:

What's the main issue with a store for internal state of the application?

I think this internal storage makes it harder for the storage controller (heketi in this case) to achieve HA.

Well, first of all, heketi is not a storage controller in the narrow Kubernetes sense; it is a component of core Gluster which, in this case, runs inside Kubernetes. :-)

Can you be more specific about how this makes it more difficult to achieve HA?

We currently have two (kind of hacky) ways to achieve HA, both based on where that internal state is stored:

  1. Store the DB on distributed storage, in our case a Gluster volume itself.
  2. Sync the DB to a Kubernetes Secret.
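
For example, option 2 could look roughly like this with client-go (namespace, secret name, and key are invented; this is not heketi's actual sync code):

```go
// Rough sketch of option 2: persisting a small state DB file into a
// Kubernetes Secret so it survives pod restarts. Namespace, secret name,
// and key are invented for illustration.
package main

import (
	"context"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func syncDBToSecret(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "heketi-db-backup", Namespace: "gluster"},
		Data:       map[string][]byte{"heketi.db": data},
	}
	// Try update first; fall back to create if the secret doesn't exist yet.
	_, err = clientset.CoreV1().Secrets("gluster").Update(context.TODO(), secret, metav1.UpdateOptions{})
	if err != nil {
		_, err = clientset.CoreV1().Secrets("gluster").Create(context.TODO(), secret, metav1.CreateOptions{})
	}
	return err
}

func main() {
	if err := syncDBToSecret("/var/lib/heketi/heketi.db"); err != nil {
		panic(err)
	}
}
```

Note that a Secret is capped at roughly 1MiB, which is one reason both of these approaches are only stopgaps.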

I've opened heketi/heketi#1333

Yeah, it's perfectly valid to request different storage backends for the internal state. It will be a good amount of work to abstract it out, though. Glusterd2 will use etcd (its own instance) to store its state.

MartinForReal (Contributor) commented Aug 29, 2018

Can you be more specific about how this makes it more difficult to achieve HA?

heketi uses BoltDB to store metadata (such as cluster info, node info, device lists, and brick lists), and BoltDB can't be opened by two or more processes at the same time, so we can't use a shared-DB pattern (which would not be a good way to achieve HA anyway).
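
The lock behavior is easy to demonstrate with bbolt (file name invented for the sketch; in a real scenario the second open would come from a second heketi pod):

```go
// Demonstrates BoltDB's exclusive file lock: the database file can only be
// held open by one process at a time, which is what rules out a shared-DB
// HA pattern.
package main

import (
	"fmt"
	"time"

	bolt "go.etcd.io/bbolt"
)

func main() {
	// First open acquires an exclusive lock on the file.
	db, err := bolt.Open("heketi.db", 0600, nil)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// A second open against the same file blocks on that lock. With a
	// Timeout set it fails fast instead of hanging forever.
	_, err = bolt.Open("heketi.db", 0600, &bolt.Options{Timeout: 1 * time.Second})
	fmt.Println("second open:", err) // prints a lock timeout error
}
```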

MartinForReal (Contributor) commented Sep 29, 2018

I think it is reasonable to wait until glusterd2 is stable and released.

stale bot commented Dec 28, 2018

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label Dec 28, 2018

stale bot commented Jan 4, 2019

This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.

@stale stale bot closed this Jan 4, 2019
