Operator Lifecycle Manager

This project is a component of the Operator Framework, an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. Read more in the introduction blog post.

OLM extends Kubernetes to provide a declarative way to install, manage, and upgrade operators and their dependencies in a cluster.

It also enforces some constraints on the components it manages in order to ensure a good user experience.

This project enables users to do the following:

  • Define applications as a single Kubernetes resource that encapsulates requirements and metadata
  • Install applications automatically with dependency resolution or manually with nothing but kubectl
  • Upgrade applications automatically with different approval policies

This project does not:

  • Replace Helm
  • Turn Kubernetes into a PaaS

Getting Started

Installation

Install OLM on a Kubernetes or OpenShift cluster by following the installation guide.

For a complete end-to-end example of how OLM fits into the Operator Framework, see the Operator Framework Getting Started Guide.

Kubernetes-native Applications

An Operator is an application-specific controller that extends the Kubernetes API to create, configure, manage, and operate instances of complex applications on behalf of a user.

OLM requires that applications be managed by an operator, but that doesn't mean each application needs an operator written from scratch. Depending on the level of control required, you may:

  • Package up an existing set of resources for OLM with helm-app-operator-kit without writing a single line of Go.
  • Use the operator-sdk to quickly build an operator from scratch.

Once you have an application packaged for OLM, you can deploy it with OLM by writing a ClusterServiceVersion.
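
A minimal ClusterServiceVersion is sketched below; the operator name, image, and version are placeholders, not a real package:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator.v0.1.0
spec:
  displayName: My Operator
  version: 0.1.0
  install:
    strategy: deployment
    spec:
      deployments:
        - name: my-operator
          spec:
            replicas: 1
            selector:
              matchLabels:
                name: my-operator
            template:
              metadata:
                labels:
                  name: my-operator
              spec:
                containers:
                  - name: my-operator
                    # placeholder image for illustration
                    image: example.com/my-operator:v0.1.0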

ClusterServiceVersions can be collected into CatalogSources, which allow automated installation and dependency resolution via an InstallPlan and can be kept up to date with a Subscription.

Learn more about the components used by OLM by reading about the architecture and philosophy.

Key Concepts

CustomResourceDefinitions

OLM standardizes interactions with operators by requiring that the interface to an operator be via the Kubernetes API. Because we expect users to define the interfaces to their applications, OLM currently uses CRDs to define the Kubernetes API interactions.

Examples: EtcdCluster CRD, EtcdBackup CRD
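
As a concrete example, once the etcd operator is installed, a user requests a cluster purely through the Kubernetes API; a minimal EtcdCluster custom resource might look like this (the size and version values are illustrative):

apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example-etcd-cluster
spec:
  size: 3            # illustrative cluster size
  version: "3.2.13"  # illustrative etcd version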

Descriptors

OLM introduces the notion of “descriptors” of both spec and status fields in Kubernetes API responses. Descriptors are intended to indicate various properties of a field in order to make decisions about its content. For example, this can drive connecting two operators together (e.g. passing the connection string from a MySQL instance to a consuming application) and can be used to drive rich interactions in a UI.

See an example of a ClusterServiceVersion with descriptors
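
For illustration, a status descriptor attached to an owned CRD in a ClusterServiceVersion looks roughly like this; the podCount descriptor URN follows the upstream UI convention, and the field values are illustrative:

customresourcedefinitions:
  owned:
    - name: etcdclusters.etcd.database.coreos.com
      version: v1beta2
      kind: EtcdCluster
      statusDescriptors:
        - displayName: Size
          description: The current size of the etcd cluster.
          path: size
          x-descriptors:
            - 'urn:alm:descriptor:com.tectonic.ui:podCount'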

Dependency Resolution

To minimize the effort required to run an application on Kubernetes, OLM handles dependency discovery and resolution for the applications it manages.

This is achieved through additional metadata on the application definition. Each operator must define:

  • The CRDs that it is responsible for managing.
    • e.g., the etcd operator manages EtcdCluster.
  • The CRDs that it depends on.
    • e.g., the vault operator depends on EtcdCluster, because Vault is backed by etcd.

Basic dependency resolution is then possible by finding, for each “required” CRD, the corresponding operator that manages it and installing it as well. Dependency resolution can be further constrained by the way a user interacts with catalogs.
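
In a ClusterServiceVersion, this metadata lives under customresourcedefinitions, split into owned and required lists. A sketch for a vault operator that manages VaultService and depends on EtcdCluster (resource names follow the upstream examples, but treat them as illustrative):

customresourcedefinitions:
  owned:
    - name: vaultservices.vault.security.coreos.com
      version: v1alpha1
      kind: VaultService
  required:
    - name: etcdclusters.etcd.database.coreos.com
      version: v1beta2
      kind: EtcdCluster

Note that each required entry names a Group, Version, and Kind, not an operator version, which is exactly the granularity discussed below.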

Granularity

Dependency resolution is driven through the (Group, Version, Kind) of CRDs. This means that no updates can occur to a given CRD (of a particular Group, Version, Kind) unless they are completely backward compatible.

There is no way to express a dependency on a particular version of an operator (e.g. etcd-operator v0.9.0) or application instance (e.g. etcd v3.2.1). This encourages application authors to depend on the interface and not the implementation.

Discovery, Catalogs, and Automated Upgrades

OLM has the concept of catalogs, which are repositories of application definitions and CRDs.

Catalogs contain a set of Packages, which map “channels” to a particular application definition. Channels allow package authors to write different upgrade paths for different users (e.g. alpha vs. stable).

Example: etcd package
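
In the operator-registry format, a package manifest is a small YAML document that maps each channel to the ClusterServiceVersion it currently points at; a sketch, with illustrative CSV names and channel layout:

packageName: etcd
channels:
  - name: alpha
    currentCSV: etcdoperator.v0.9.2  # illustrative CSV name
  - name: stable
    currentCSV: etcdoperator.v0.9.0  # illustrative CSV name
defaultChannel: alpha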

Users can subscribe to channels and have their operators automatically updated when new versions are released.

Here's an example of a subscription:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd
  namespace: local 
spec:
  channel: alpha
  name: etcd
  source: rh-operators

This will keep the etcd ClusterServiceVersion up to date as new versions become available in the catalog.

Catalogs are served internally to OLM over a gRPC interface from operator-registry pods.
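
A CatalogSource points OLM at one of these registries, either by naming an operator-registry image for OLM to run in-cluster or, via the address field, by pointing at a registry that is already serving; the image, address, and namespace below are placeholders:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operators
  namespace: local
spec:
  sourceType: grpc
  # placeholder registry image; alternatively, omit image and set
  # address: <host>:<port> to use an already-running registry
  image: example.com/my/operator-registry:latest
  displayName: My Operators
  publisher: Example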

User Interface

Use the OpenShift admin console (compatible with upstream Kubernetes) to interact with and visualize the resources managed by OLM. Create subscriptions, approve install plans, identify Operator-managed resources, and more.

Ensure kubectl is pointing at a cluster and run:

$ ./scripts/run_console_local.sh

Then visit http://localhost:9000 to view the console.

Subscription detail view (screenshot)