
proposals: add rook #57

Merged (1 commit merged into cncf:master on Jan 29, 2018)

Conversation

@bassam (Contributor) commented Nov 1, 2017

As requested during the July 11th 2017 CNCF TOC meeting (minutes), we submit Rook for consideration to be included as a CNCF project.

First presentation to CNCF TOC is here.

@ichekrygin

We used Rook underneath our Prometheus servers at HBO, running on Kubernetes deployed on AWS. Rook made a significant improvement to Prometheus pod restart time, virtually eliminating downtime and metrics scrape gaps. We are looking forward to Rook reaching a production-ready state.

@carldanley

We use Rook for distributing storage across multiple large, on-prem clusters (25-40 nodes). Rook has saved us a ton of time and completely eliminated our "on-prem" storage problems (which can otherwise be annoying, depending on your setup). We've tried various other storage solutions such as NFS, GlusterFS, etc., and none of them was anywhere near as easy as Rook to set up (and maintain). We're excited to see Rook become "production-ready"!

@hunter commented Nov 16, 2017

We've spent a lot of time getting Ceph running on Kubernetes, with a number of deployments in production. Despite all that work, there are still a number of areas where the integration with a Kubernetes environment is not ideal.

As a result we've been exploring and contributing to Rook as a replacement. It combines the power of Ceph with an improved user experience and deeper integration into the Kubernetes platform. As Rook continues to mature it is our intention to use it as the default storage platform for our customer environments. It would be a welcome addition to the CNCF.

@Ulexus commented Nov 16, 2017

As one of the original authors of the docker containerization of Ceph, I've been excited by the adaptation Rook has brought to the ecosystem. They've been thinking of exactly the right kinds of adaptations to move Ceph into a container- and cloud-focused system.

@josephjacks

IMO, Rook is a uniquely exciting storage project in the cloud-native ecosystem. It promises to unlock the incredible power of K8s extensibility with excellent implementations of CRDs and custom controller patterns, enabling universal workload support (running stateful apps fully on K8s, reliably, is still very hard). Perhaps most critically, it carves a path toward completely decoupling applications (including persistence tiers) from vendor-specific APIs (cloud and metal) and into the cloud-native environment. I'm excited to see the project and community continue to grow, mature and thrive -- hopefully, within the CNCF!

@debianmaster commented Nov 16, 2017

👍 non binding

@kongslund

This cloud-native storage project has a lot of potential and is already showing its usefulness. We are currently using it as part of a Kubernetes-based platform that, sometime during the first half of 2018, will provide 1 PB of replicated block storage to its users.

I really like how the project embraces the Kubernetes philosophy through the use of custom resource definitions handled by custom controllers/operators, and placement through the use of tolerations, node labels and node (anti-)affinities.
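
As a rough illustration of that placement model, a cluster manifest can pin Rook's daemons to dedicated storage nodes. The field names below follow the rook.io/v1alpha1 Cluster CRD of that era as best I can reconstruct them; treat this as an assumption-laden sketch rather than an exact manifest:

```yaml
apiVersion: rook.io/v1alpha1
kind: Cluster
metadata:
  name: rook
  namespace: rook
spec:
  dataDirHostPath: /var/lib/rook
  # Constrain all Rook daemons to nodes labeled for storage duty,
  # and tolerate the taint that keeps general workloads off them.
  placement:
    all:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: role
              operator: In
              values:
              - storage-node
      tolerations:
      - key: storage-node
        operator: Exists
```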

@pstadler

Utilizing hyper-converged systems with storage tightly coupled to computational resources reduces cost and operational complexity of infrastructure. This is especially true for small-scale cluster deployments. One of the biggest challenges with Kubernetes on bare metal is providing distributed block storage. Although proprietary solutions exist, there's been a lack of well-backed, easy-to-use open source solutions. Rook has the potential to fill this void.

I'm currently testing Rook on a small scale cluster and will soon begin to rewrite the storage section of the Hobby Kube project to endorse Rook over the proprietary Portworx solution.

@recollir

+1. Rook really helps to get the "cloud-native" thinking for storage into on-prem installations of k8s. It removes the complexity of setting up and maintaining a storage cluster yourself. All the operational knowledge needed is nicely packaged into Rook.

@rimusz commented Nov 16, 2017

+1. Rook is a really good option for in-cluster data storage: whatever your app needs -- block, file, or object storage -- it has it.

@luxas commented Nov 16, 2017

+1, excited to see Rook grow and mature from early on. I've used Rook for quite some time already and it has proven to be a great project; I also helped a little with design decisions along the way. What makes Rook great is the clever usage of CRDs, and the fact that it's not built into Kubernetes (which quickly gets painful, as we've seen with all the in-tree cloud providers and volume plugins) and that it's OSS. Instead of going the build-into-Kubernetes way, Rook chose to participate in the development of CSI and its predecessor, Flexvolume, actually making Kubernetes better.

Finally, I love that this project focuses on ease of use, and especially that it enables running stateful workloads easily on bare metal (where there is no possibility of using a cloud provider's PV services).

@stevesloka

+1. The ability to spin up stable storage in a cloud or on-prem environment has been a fantastic help!

@dimm0 commented Nov 16, 2017

Rook has already helped us provide fast storage to users in our multi-institutional Kubernetes installation for running large machine learning and other modeling jobs, as well as keeping data for Kubernetes monitoring pods and handling regular data storage for users. We love the features it provides and the great support for solving issues along the way. We're looking forward to it becoming production-ready so we can use it in our new projects. (Nautilus project, UCSD)

@koensayr

I think Rook is a really interesting idea, but I'm wondering why something like this hasn't been run by the CNCF storage working group. There is a lot going on there and I think it's worth presenting this there before we bring it to the TOC. Some of the storage landscape work going on there probably needs to be ratified first before we start talking about storage-related projects.

@bassam (Contributor Author) commented Nov 16, 2017

Hi @koensayr, thanks for taking a look. Rook was presented to the storage working group on July 18th this year. Meeting minutes and video recording can be found here: https://docs.google.com/document/d/1DigEag4UUpf53qYBEr50YIdVJJvhXhxHN5ATj-js-IA.

@koensayr

I recall. But I don't recall that there was a consensus to move this forward to the TOC. I don't think there was an invitation/agreement at that time to move forward, and there were various questions I think the community was left with. I have to imagine things have changed since July, and I think it would be great if another presentation were made covering what's new/changed.

@bassam (Contributor Author) commented Nov 17, 2017

@koensayr I updated the original comment in the PR with more details. The TOC invited Rook to do a written proposal (after a show of hands) during the July 11th meeting (see minutes link above).

@wattsteve

While I can appreciate the enthusiasm in the comments, I think the next step here, per the CNCF SWG intended goals, would be to have the submission reviewed and discussed by the CNCF SWG.

@gourao commented Nov 17, 2017

Congrats to the Rook team on this PR. But while Rook is a convenient way to deploy Ceph, what we are working on at the Storage WG w.r.t. cloud native storage is something that can facilitate many different cloud native storage solutions. As a WG we need to first discuss what problems such an orchestrator would solve, and how it would complement the efforts of CSI. I think that this work should be the focus before the CNCF endorses a single approach. Remember, CSI isn't even in k8s yet. I think we should focus on that work first, then look at including individual storage products in the CNCF.

Additionally, Rook itself is currently tied to Ceph, which, as a storage implementation, may not be suitable for many cloud native workloads. These were some of the topics that were discussed at the storage WG and at the face-to-face meetings. We plan on discussing this again at the next WG. Others on the WG and I think the current focus should be more on clarifying what cloud native actually means, and then figuring out the next set of problems to solve from there (like we did with CSI). Looking forward to continuing this discussion at the WG.

@ferrantim

@pstadler what do you mean by hyperconverged?

Utilizing hyper-converged systems with storage tightly coupled to computational resources reduces cost and operational complexity of infrastructure.

Ceph distributes blocks across the cluster [1], so if you are using Ceph for MySQL, for example, your MySQL pod's data volume will never be entirely on the same host as your pod. This is how Ceph gets HA: by distributing a volume across multiple hosts, which imposes a network latency penalty you don't get with hyperconverged systems. That doesn't mean it is necessarily bad for your use case, but it does mean it is not hyperconverged.

[1] http://ceph.com/ceph-storage/file-system/ . In particular "The metadata server cluster can expand or contract, and it can rebalance the file system dynamically to distribute data evenly among cluster hosts. This ensures high performance and prevents heavy loads on specific hosts within the cluster."

@monadic (Contributor) commented Nov 18, 2017

swg approval is not a requirement for project submission. The toc has sole authority for project acceptance. It has delegated investigation of certain strategic matters to the swg which is doing a fine job! I'd like to take the temperature on all things storage at kubecon since the toc will have a f2f there as well as the swg.

@gourao commented Nov 18, 2017

@monadic agreed on what you said about the TOC having sole authority. However, I'd like to think that the SWG's input is appropriately considered. What was brought up at one of the SWG meetings can be re-hashed at the f2f. With Rook, there are a few concerns. One being that it is built around Ceph, and Red Hat has not made Ceph part of the CNCF. I can't wrap my head around how this would work then (logistically speaking). Secondly, there are concerns about whether Ceph really is a good fit for a lot of cloud native stateful apps/dbs.

Honestly I'd rather be more supportive of something like the minio project being proposed first.

@monadic (Contributor) commented Nov 18, 2017

We are aware of the ceph issue, which was raised in the initial presentation. With storage there will necessarily be cncf projects that interface with non-cncf projects, so I don't think that can ipso facto block us here. That said, rook had talked about interfacing with other systems, and that would be a prerequisite for future widespread adoption imho.

Moreover if ceph is not in cncf and if there is a strong demand within cncf for alternatives to ceph, that door is open. Which also should not block rook.

Overall, it would be good to get a feel for where things stand in Austin. Swg needs to focus on deliverables and readout. Toc will not hold back from interference or preemption if swg is slow.

Finally, we are definitely listening carefully to expert opinions arising via swg and via toc contributor model.

@desdrury

+1 for this!

I spent over a year trying to get Ceph defined and running as a stable workload in the open source project Open Datacentre. Although I succeeded in getting it running, it was never entirely stable, and the work required to set it up and maintain it was a real burden. When Rook came along I could immediately see the potential, and it has proven to be a simple yet powerful capability that means, for me, the usage of Ceph is now solved!

@jpds commented Nov 23, 2017

I've been using Rook in conjunction with my Kubernetes clusters on bare metal for a number of months now, and I have found it to be by far the easiest way to deploy a reliable Ceph cluster for my containers' persistent storage needs.

The team has also been very proactive in working on bugs that I've reported. I heartily endorse Rook for inclusion in the CNCF!

@wattsteve commented Nov 27, 2017

@monadic I agree with what you said. I just want to add some extra context in that earlier in the year, the SWG met with @benh and established a charter for the SWG, where, among other things, we'd help him review inbound storage related CNCF project submissions. As such, we are going to put the Rook proposal on the SWG agenda so we can provide you with an SWG perspective to consider as the TOC reviews the submission. As with anything with volunteers, timelines are helpful for prioritization and mobilization. Does the TOC need the feedback by a particular date?


From the proposal:

Rook is currently in alpha state and has focused initially on orchestrating Ceph on top of Kubernetes. Ceph is a distributed storage system that provides file, block and object storage and is deployed in large-scale production clusters. Rook is planning to be production ready by Dec '17 for block storage deployments on top of Kubernetes.

With community participation, Rook plans to add support for other storage systems beyond Ceph and other cloud native environments beyond Kubernetes. The logic for orchestrating storage systems can be reused across storage backends. Also, having common abstractions, packaging, and integrations reduces the burden of introducing storage backends and improves the overall experience.
@bgrant0607 (Contributor)

I think the Rook blog said work on Gluster support has started. Is that correct?

I find the APIs somewhat low-level and they expose some aspects of the storage system topology (e.g., distinguishing metadata from data), but is there anything actually Ceph-specific?

How much of the implementation (rough percentage) is Ceph-specific?

Is there an internal plugin-like API/abstraction between Rook and the storage system for the operations the storage system must perform?

@bassam (Contributor Author)

@bgrant0607 we started an early design doc for supporting Gluster. With community support I think it could happen, and new work happening in Gluster D2 could simplify the effort. There has also been early community interest in supporting other storage backends beyond Ceph and Gluster.

The APIs are low-level and should carry forward to other storage systems. Our thought is to have common APIs (like Cluster and Pool) with common and backend-specific sections (like StorageClass), as well as backend-specific APIs. Some APIs could eventually become first-class in Kubernetes (like VolumeAttachment).

Roughly I'd say 40% is ceph specific right now.

There is no internal plugin-like API. We've experimented with that but found it to be too restricting. Instead, the common abstractions are the high-level APIs/CRDs, and there are common libraries/pkgs that would be shared by the different backend implementations.
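
To make that split concrete, here is a sketch of the kind of CRD being described, shaped after the rook.io/v1alpha1 Pool type; the exact field names are assumptions based on the Rook v0.x API and are illustrative only:

```yaml
apiVersion: rook.io/v1alpha1
kind: Pool
metadata:
  # Common section: every backend understands a named pool
  # belonging to a storage cluster's namespace.
  name: replicapool
  namespace: rook
spec:
  # Backend-specific section: replication and erasure coding are
  # Ceph-level concepts; another backend would supply its own
  # settings block here instead.
  replicated:
    size: 3
  # Alternatively, erasure coding instead of replication:
  # erasureCoded:
  #   dataChunks: 2
  #   codingChunks: 1
```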

@monadic (Contributor) commented Nov 28, 2017

@wattsteve I would like to get a readout from swg at kubecon. Interested in areas which are settled, and understanding gaps that the toc can take a view on

Not in a crazy rush with rook until understand people's thoughts next week. Then will discuss with toc.

@glerchundi commented Nov 28, 2017 via email

@skamille (Contributor) commented Nov 29, 2017 via email

@bassam (Contributor Author) commented Dec 1, 2017

minor: updated the community stats and sponsorship in the proposal (and rebased)

@lpabon commented Dec 5, 2017

Sorry I am late to this.

Let me start out by saying that I have been following the Rook project for more than a year. I really like the project and want it to succeed.

With that in mind, I believe that Rook should not be part of the CNCF until it has a larger community than one or two companies and supports more than one or two storage systems.

I believe that the project should be successful as an open community project before participating as a CNCF project. If not, we could create confusion both for other projects that want to be part of the CNCF and for users about what it means to be part of the CNCF.

I do want to reiterate that I want Rook.io to be a great open community project supporting multiple storage systems, and I think that will then be the appropriate time for it to become part of the CNCF.

@hunter commented Dec 6, 2017

Glad you joined the discussion, @lpabon. Having worked on Quartermaster, do you see any area for learning/collaboration on multiple storage systems for Rook?

@gourao commented Dec 6, 2017

Thanks to everyone who spoke at the SWG meeting at KubeCon yesterday about this. We had a good, healthy debate, and I believe that everyone is acting in the best interests of the k8s community. I want to summarize my argument here for why I don't think Rook is ready today to be a CNCF project. I just want to emphasize that I am not saying it will never be ready, just not ready today.

Bassam has made an argument that Rook is about more than Ceph, and I believe him that he would like it to be about more than Ceph. But the reality today is that Rook is all about Ceph and only adds value to Ceph users. Having an easy way to operate Ceph for k8s is awesome, and I am very supportive of that! The problem is that if Rook today is added to CNCF, then the impact of that is a CNCF endorsement of Ceph for Kubernetes applications. It is king making Ceph. I strongly advocate that if Rook is truly about more than Ceph, then it adds a Gluster or some other backend, gets some community support from the Ceph and Gluster communities, then re-applies.

Bassam has made the argument that this is a chicken-and-egg problem: if Rook is not in CNCF, then he doesn't have the resources to add Gluster. But I don't think that is the way to look at it. If Rook truly is valuable beyond Ceph, then the Gluster community should WANT to add Rook support. Nothing is stopping them today. The reason they haven't, I believe, is that Rook is really valuable for Ceph, not other backends.

CNCF should not be in the business of king making particular storage solutions, and arguments that Rook is about more than Ceph aren't compelling given the reality of how users are using it. Rook is a great project for the Ceph community, not for the CNCF community. I hope that makes sense.

@skamille (Contributor) commented Dec 6, 2017 via email

@jessfraz commented Dec 6, 2017

I agree with @skamille.

@gourao on your comment:

The problem is that if Rook today is added to CNCF, then the impact of that is a CNCF endorsement of Ceph for Kubernetes applications. It is king making Ceph.

AFAIK Rook already has adoption, unlike some of the other proposals here... so that makes a stronger case IMHO. It also seems like leniency has already been accorded to some projects and not others when it comes to technical design and actual adoption.

And it seems like the problem you personally have with it, is that you are selling a different storage option.

Rook can easily support other filesystems, but the truth is there is no less bad option than Ceph, and if we sit around waiting for some new filesystem to solve all our problems, we will be waiting for eternity for it to get to a place where people on old kernels can even use it. So I really don't see the point in the whole "let's tear down Ceph" thing when it is really what most people are using today.

@recollir commented Dec 6, 2017

We have several container runtimes in the CNCF, and we have several service meshes in the CNCF. Getting Rook (with Ceph) into the CNCF now doesn't make it king, only first. Other projects (with other file systems, or even with Ceph) can still come up and apply. I really think that having a cloud-independent, open source storage solution for k8s in the CNCF now is a benefit for the community.

@desdrury commented Dec 6, 2017

Rook is a mature and stable project that continues to be developed in a professional manner. It fills a glaring gap in bare-metal, and even cloud-based, delivery of Kubernetes: namely, a simple and reliable way to integrate storage so that stateful workloads can be run. Ceph is battle-tested over many years and on many large-scale installations. It is the perfect choice as the first storage backend. And I trust the project maintainers when they say they want to add additional backends. I think the CNCF would greatly benefit in filling out its portfolio of projects by adopting Rook.

@bgrant0607 (Contributor)

@lpabon @gourao

CNCF aspires to foster high-quality, high-velocity open-source projects that have value to users trying to operate applications in a cloud-native fashion. Whether a project is ready for CNCF is up to the project and to the TOC. The TOC is actively seeking earlier-stage inception-level projects.

Please see:
https://github.com/cncf/toc/blob/master/PRINCIPLES.md
https://github.com/cncf/toc/blob/master/process/graduation_criteria.adoc
https://github.com/cncf/toc/blob/master/process/due-diligence-guidelines.md

I share the concerns of @skamille that the discussions in this area are not conforming to community norms and are not aligned with the CNCF mission.

The TOC's process is:

  1. A project that could potentially be a fit for CNCF is identified
  2. Project is invited to present during a regular TOC meeting
  3. The TOC discusses the project
  4. The project may at some point be invited to submit a proposal, in which case it needs a sponsor on the TOC
  5. A PR is created for the proposal
  6. Due diligence occurs
  7. Once TOC members feel they have enough information to make a decision, a vote is called on TOC mailing list

The Storage WG can potentially help with step 1, identifying promising projects, and step 6, performing and summarizing diligence for the TOC.

As we perform diligence on the project, let's please stick to whether it's aligned with the CNCF mission, potentially has value to users, exhibits cloud-native patterns that we want to promote, meets CNCF technical standards for the level of maturity of the project, and matches CNCF governance expectations.

@lpabon commented Dec 7, 2017

@hunter Hi! Yeah, Quartermaster (whose development was stopped due to a company realignment) was very much aligned with the same goal as Rook. Luckily Rook is continuing the same idea as Quartermaster. The difference is that Quartermaster had support for NFS, GlusterFS, and OpenStack Swift. This is what I am suggesting to Rook, so that these communities can help Rook get better and become its champions.

@bgrant0607

Whether a project is ready for CNCF is up to the project and to the TOC.

This is news to me.

The concern I have is that I am here at KubeCon and see the CNCF project images highlighted throughout the conference. I would have preferred that the CNCF instead highlight the areas of interest and how these stacks solve problems for users. My concern with the project images throughout the conference is that, as a customer/user, it would seem that non-CNCF projects should not be used or even tried. And that is my concern with accepting extremely early projects whose goals have not been realized yet, since Rook currently supports a single storage system.

But I appreciate at least being given a forum to provide my/our opinion.

@dankohn (Contributor) commented Dec 7, 2017

@lpabon Please note this text at the bottom of Cloud Native Landscape:

This landscape is intended as a map through the previously uncharted terrain of cloud native technologies. There are many routes to deploying a cloud native application, with CNCF Projects representing a particularly well-traveled path.

Portworx is shown on the landscape in the same box as CSI and Rook.

@lpabon commented Dec 7, 2017

@dankohn Nice! When are we going to make that look like Google Earth :-). That would be so cool.

@tnachen (Contributor) commented Dec 9, 2017

Is there an architectural design overview available somewhere for Rook? I couldn't really find a components overview or diagram on GitHub or on the website.
Also, it's not quite clear to me what the abstraction of the backend is, and how it can be decoupled from what Ceph offers (and what other storage backends potentially don't), as much of the documentation around features and design revolves around Ceph. Does Rook's design only try to find storage backends that offer similar high-level features (block, object, erasure coding, etc.), or is it planned to offer extensible capabilities?

@bassam (Contributor Author) commented Dec 10, 2017

@tnachen thanks for taking a look!

Is there an architectural design overview available somewhere for Rook? I couldn't really find a components overview or diagram on GitHub or on the website.

Take a look here and here.

Also, it's not quite clear to me what the abstraction of the backend is

There is no internal abstraction of the backend. This was an explicit design choice. The Kubernetes CRDs are the only abstraction, and they define things like storage clusters, pools, etc. Some of the specs, like storage cluster, storage node, and node selection, are shared, and others will be backend-specific, for example erasure coding settings. We plan on refactoring our existing CRDs to clearly delineate between the two.

We have focused primarily on Ceph up to this point and have written a design doc on supporting Gluster. We would hope that there is enough end-user value right now to proceed with Rook at the inception stage. Before the project goes to the incubation stage, we plan to work with community members to have multiple backends supported.

Does Rook's design only try to find storage backends that offer similar high-level features (block, object, erasure coding, etc.), or is it planned to offer extensible capabilities?

Rook is scoped to File, Block and Object storage providers -- i.e., low-level storage with common interfaces.
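
For an end user, those common interfaces surface through standard Kubernetes objects. Here is a sketch of consuming Rook block storage via a StorageClass and a PersistentVolumeClaim; the provisioner name and parameters are assumptions modeled on the Rook v0.x documentation, illustrative rather than authoritative:

```yaml
# A StorageClass wired to Rook's dynamic block provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-block
provisioner: rook.io/block   # assumed v0.x provisioner name
parameters:
  pool: replicapool          # a Rook Pool, as defined via CRD
---
# Applications stay vendor-neutral: they simply claim storage
# against the class and mount the resulting volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  storageClassName: rook-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```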

@epowell101

Just a couple of quick thoughts as a leader of OpenEBS - an open source storage project written from the ground up to be "cloud native", with open source traction equal to or greater than Rook's - I am, perhaps not surprisingly, not in favor of Rook gaining the support of the CNCF at this time.

I would suggest we ask ourselves: what IF storage and data services could be delivered using cloud native principles, with the storage controllers themselves being containers? What if Kubernetes were not just something we provided storage TO - for example by attaching Ceph or any other scale-out storage system to it - but also something we used to provide storage?

What we are finding is that this pattern - similar to that of StorageOS and Portworx, and also similar to direct-attached storage - delivers many real benefits that putting all the data from all the workloads in one storage system does not. I won't list them all; however, think about what it means to map a workload and its app team one-to-one to a storage controller, as opposed to 100+ to one storage system. Conversely, having a resilient containerized cloud-native environment tied to a famously difficult single storage system like Ceph feels like an "anti-pattern".

As an aside, solutions like OpenEBS can run on top of solutions like Ceph. So it is not entirely one or the other.

We've been asked to present to the TOC, or to someone, as I understand it, and that might be a good path. Though, while we have as much or more traction than Rook, at least on GitHub, I would question whether the CNCF would want to give OpenEBS that stamp of approval either.

I feel like it might be worth backing up a level and asking why shared storage is an anti-pattern for more and more workloads - for much of NoSQL, for example. Doing so could require some storage geekery around NVMe, kernel interrupts, and so forth in order to explain the ever-growing tax that most shared scale-out storage places on performance versus local storage, and hence versus OpenEBS and other similar architectures. Cloud native architectures are not the only disruptive forces arguing against scale-out for many use cases.

Sorry, we have mainly been lurking. We are getting more visibly involved and welcome the opportunity.

@liewegas

We (both Ceph and Red Hat) are very excited about Rook and would be happy to see it join the CNCF. Controlling Ceph service containers running in Kubernetes is complicated, and after several go-arounds with simpler options (e.g., ceph-helm) we came to the conclusion that we wanted a custom operator to do it properly. Rook fits that bill nicely by both providing a controller for the various Ceph containers and also a nice set of interfaces to orchestrate clusters and storage services in a Kubernetes-friendly way. The user interest Rook has seen speaks volumes: it is solving a challenging problem that Ceph doesn't solve on its own.

We expect Rook to be the preferred way to run Ceph on Kubernetes. We have a list of improvements we'd like to make to Rook, including allowing it to consume standard upstream Ceph containers (and eventually downstream containers for the likes of Red Hat Ceph Storage), standardizing on upstream tools for device/OSD management, adding support for OpenShift, and improving object storage orchestration (radosgw). We're also starting work on a new CSI driver for Ceph that Rook will eventually be able to consume in place of its current flexvolume driver.

I don't really understand the objections to Rook joining CNCF. Rook can be extended to orchestrate open source SDS systems other than Ceph, or other projects can choose to use other tools (and contribute them to the CNCF if they so choose). Incubating one SDS orchestrator in CNCF doesn't preclude doing the same for others in the future.

@countspongebob

Non binding +1 and two 👍 👍

@tnachen (Contributor) commented Jan 19, 2018 via email

@jbguerraz

Non binding +1 :)

@j-griffith

Meh, as I mentioned in Austin, I'm not sure how it differs from the myriad of other things out there: Cinder, RexRay, Trident, OpenSDS, etc. Additionally, a number of items in that list have much broader device support. All the comments about this being "real cloud native storage" in this PR are somewhat strange to me. It's a management/deployment abstraction, not a storage solution at all, so I'm not following that line of thinking.

Regardless, it sounds like the TOC's is the only opinion that matters here. I do think at some point the K8s storage community needs to figure out what it wants to be when it grows up and actually focus on a common goal rather than continue to thrash back and forth. Of course, that assumes we want to have a "storage community" within K8s; that may not be a desire.

Anyway, I'm indifferent at this point.

+0

@wattsteve commented Jan 19, 2018

+1 for Inception Acceptance. Non-Binding.

I really don't think there is any reason why Rook would not qualify under the inception acceptance criteria. However, given that the Rook project's goal is to be a pluggable operator framework for software-defined storage, I would expect to see the project get adopted by more than one storage platform in order to validate that it is actually meeting its goals. That said, I gather that giving projects time to mature and gain adoption before graduating to Incubation or TLP is what the Inception phase is all about. So I'll be reserving further scrutiny until the next step in the graduation process.

@adersberger (Contributor)

+1 (non-binding)

@patrickstjohn

Rook was the easiest-to-deploy and most robust solution for our bare-metal K8s cluster that we have found. We've been getting ready for our production rollout for several months and have found it to be the best and least license-restrictive option for our use cases. It would be wonderful to have a CNCF-backed storage solution for K8s that works for bare-metal deployments.

@bgrant0607 (Contributor)

Let's please keep our conversations respectful.

https://github.com/cncf/foundation/blob/master/code-of-conduct.md

@caniszczyk (Contributor)

Hey everyone, I'm happy to announce that Rook has been accepted into the CNCF as an INCEPTION level project (sponsored by Ben Hindman): #57

+1 TOC binding votes (6 / 9):

+1 non-binding community votes:

@caniszczyk merged commit 5bb1c36 into cncf:master on Jan 29, 2018