
kubernetes upstream: add rook operator #348

Merged
merged 1 commit into operator-framework:master from leseb:rook-upstream on May 16, 2019

Conversation

6 participants
@leseb (Contributor) commented May 13, 2019

Create a new operator under community-operators for Rook, the storage orchestrator for Kubernetes. This is the upstream version.

The initial implementation expects the Ceph cluster to live in the same namespace as the operator.
It also exposes all of Rook's capabilities for creating, managing, and upgrading a cluster: simply edit the cluster CR to apply any changes to your deployment.

Signed-off-by: Sébastien Han <seb@redhat.com>
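
For illustration only, applying a change such as a Ceph upgrade amounts to editing the CephCluster CR and re-applying it; the namespace, image tag, and storage settings in this sketch are assumptions for the example, not values taken from this PR:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph        # assumed: same namespace as the operator, per the description above
spec:
  cephVersion:
    # bumping this image tag and re-applying the CR is how an upgrade would be requested
    image: ceph/ceph:v13.2.5  # assumed tag, for illustration
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
  storage:
    useAllNodes: true
    useAllDevices: false
```

Re-applying the edited CR (for example with `kubectl apply -f cluster.yaml`) lets the operator reconcile the change against the running cluster.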

@leseb changed the title from "upstream: add rook operator" to "kubernetes upstream: add rook operator" on May 13, 2019

@leseb force-pushed the leseb:rook-upstream branch 2 times, most recently from a391d7f to c9b18da on May 13, 2019

@cvp-ops (Collaborator) commented May 13, 2019

PR includes changes to only the [upstream-community-operators] directory. No pipeline will be launched.

1 similar comment

@leseb force-pushed the leseb:rook-upstream branch from c9b18da to 8ff7aed on May 13, 2019

@cvp-ops (Collaborator) commented May 13, 2019

PR includes changes to only the [upstream-community-operators] directory. No pipeline will be launched.

@leseb (Contributor, Author) commented May 13, 2019

Current Total Score: 32%...

@leseb (Contributor, Author) commented May 13, 2019

The logs are full of errors, yet the CI is green: https://travis-ci.com/operator-framework/community-operators/builds/111611074 🤔

@SamiSousa (Contributor) commented May 13, 2019

@leseb The scorecard passes since your operator is running. Our CI also prints out the operator logs, which is where you are seeing errors like this: https://travis-ci.com/operator-framework/community-operators/builds/111611074#L419

@leseb force-pushed the leseb:rook-upstream branch from 8ff7aed to e05824c on May 13, 2019

@cvp-ops (Collaborator) commented May 13, 2019

PR includes changes to only the [upstream-community-operators] directory. No pipeline will be launched.

@leseb (Contributor, Author) commented May 13, 2019

@SamiSousa it looks like the issues are solved. I don't understand the "CVP/pr-sanity-check" error here; am I missing something obvious? Thanks!

@dmesser (Contributor) commented May 14, 2019

Looks good. Can you elaborate a little more in the description on what the Operator does? E.g. that it stands up a Ceph cluster with StatefulSets, and whether or not it supports upgrades of Ceph as well as the advanced lifecycle features claimed in the capability level ("Full Lifecycle").
Also, any requirements on the Kubernetes nodes (devices, network) would be good to know.
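
(For context, the capability level referenced above is declared as an annotation on the operator's ClusterServiceVersion; the sketch below shows the general shape of that annotation, with the CSV name and version assumed rather than taken from this PR.)

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: rook-ceph.v1.0.0          # assumed name/version, for illustration only
  annotations:
    capabilities: Full Lifecycle  # the capability level being claimed
    categories: Storage
```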

@leseb force-pushed the leseb:rook-upstream branch from e05824c to cb67e93 on May 14, 2019

@leseb (Contributor, Author) commented May 14, 2019

Note: I successfully deployed a cluster using the same Travis job (by injecting a minimal CR).
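
A minimal CR of the kind that could be injected into such a CI run might look like the sketch below; the single monitor, disabled dashboard, and directory-backed storage are assumptions chosen to keep the example small, not the exact content of the Travis job:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v13.2.5   # assumed tag
  dataDirHostPath: /var/lib/rook
  mon:
    count: 1                   # a single monitor keeps the CI footprint small
    allowMultiplePerNode: true
  dashboard:
    enabled: false
  storage:
    useAllNodes: true
    useAllDevices: false
    directories:
      - path: /var/lib/rook    # directory-backed OSD, so no raw devices are needed in CI
```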

@cvp-ops (Collaborator) commented May 14, 2019

PR includes changes to only the [upstream-community-operators] directory. No pipeline will be launched.

@leseb force-pushed the leseb:rook-upstream branch from cb67e93 to d45ec73 on May 14, 2019

@cvp-ops (Collaborator) commented May 14, 2019

PR includes changes to only the [upstream-community-operators] directory. No pipeline will be launched.

@leseb force-pushed the leseb:rook-upstream branch 7 times, most recently from 186ece7 to ff20180 on May 15, 2019

@leseb force-pushed the leseb:rook-upstream branch from ff20180 to acd2b8c on May 15, 2019

upstream: add rook operator
Create a new operator under community-operators for Rook, the storage orchestrator for Kubernetes. This is the Kubernetes upstream version.

The initial implementation expects the Ceph cluster to live in the same namespace as the operator.
It also exposes all of Rook's capabilities for creating, managing, and upgrading a cluster: simply edit the cluster CR to apply any changes to your deployment.

Signed-off-by: Sébastien Han <seb@redhat.com>

@leseb force-pushed the leseb:rook-upstream branch from acd2b8c to 339ec84 on May 16, 2019

@dmesser (Contributor) commented May 16, 2019

/lgtm

@dmesser merged commit 99c0144 into operator-framework:master on May 16, 2019

2 checks passed

Travis CI - Pull Request: Build Passed
ci/prow/verify: Job succeeded.

mmgaggle added a commit to mmgaggle/community-operators that referenced this pull request Jun 13, 2019

Add rook-ceph upstream operator to okd/openshift
The rook-ceph operator already exists in the catalog for Kubernetes by way of PR operator-framework#348. This adds those same files to the catalog for okd/openshift. There is precedent for shipping both upstream and downstream operators:

* Strimzi / AMQ Streams
* Infinispan / JBoss Data Grid

@mmgaggle referenced this pull request Jun 13, 2019:

Add rook-ceph upstream operator to okd/openshift #436 (Draft, 0 of 19 tasks complete)