M3 Operator

The M3 Operator helps you set up M3 on Kubernetes. It aims to automate everyday tasks around managing M3, specifically:

  • Creating clusters
  • Destroying clusters
  • Expanding clusters (adding instances)
  • Shrinking clusters (removing instances)
  • Replacing failed instances

More Information

Community Meetings

M3 contributors and maintainers have regular meetings. Join our M3 meetup group to receive notifications on upcoming meetings: https://www.meetup.com/M3-Community/.

You can find recordings of past meetups here: https://vimeo.com/user/120001164/folder/2290331.

Office Hours

Members of the M3 team hold office hours on the third Thursday of every month from 11 a.m. to 1 p.m. EST. To join, sign up for a slot here: https://calendly.com/chronosphere-intro/m3-community-office-hours.

Install

Dependencies

The M3 operator targets Kubernetes 1.11 and 1.12. We aim to target the latest two minor versions supported by GKE, but welcome community contributions to support more versions.

The M3 operator is intended for creating highly available clusters across distinct failure domains. For this reason it only supports Kubernetes clusters with nodes in at least 3 zones, but support for zonal clusters is coming soon (https://github.com/m3db/m3db-operator/issues/68).

Usage

The following instructions are a quickstart to get a cluster up and running. This setup is not for production use, as it has no persistent storage. Read the operator documentation for more information on production-grade clusters.

Create an etcd Cluster

M3 stores its cluster placements and runtime metadata in etcd and needs a running cluster to communicate with.

kubectl apply -f https://raw.githubusercontent.com/m3db/m3db-operator/v0.13.0/example/etcd/etcd-basic.yaml

Install the Operator

Using kubectl (installs in the default namespace):

kubectl apply -f https://raw.githubusercontent.com/m3db/m3db-operator/v0.13.0/bundle.yaml

Create an M3 Cluster

The following command creates an M3 cluster with 3 replicas of data across 256 shards that connects to the 3 available etcd endpoints.

kubectl apply -f https://raw.githubusercontent.com/m3db/m3db-operator/master/example/m3db-local.yaml
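For reference, a minimal M3DBCluster manifest along the lines of example/m3db-local.yaml might look like the following sketch. The field values here are illustrative (in particular the image tag and group names); consult the example manifest in the repository for the authoritative version.

```yaml
apiVersion: operator.m3db.io/v1alpha1
kind: M3DBCluster
metadata:
  name: simple-cluster
spec:
  image: quay.io/m3db/m3dbnode:latest   # illustrative tag; pin a real version
  replicationFactor: 3                  # 3 replicas of data
  numberOfShards: 256                   # data spread across 256 shards
  etcdEndpoints:                        # the 3 etcd endpoints from the etcd example
    - http://etcd-0.etcd:2379
    - http://etcd-1.etcd:2379
    - http://etcd-2.etcd:2379
  isolationGroups:                      # one group per zone; 3 zones required
    - name: group1
      numInstances: 1
    - name: group2
      numInstances: 1
    - name: group3
      numInstances: 1
```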

When running on GKE, the user applying the manifests needs the ability to allow cluster-admin-binding during installation. Use the following ClusterRoleBinding with the user name provided by gcloud:

kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value core/account)

Resize a Cluster

To resize a cluster, specify the new number of instances you want in each zone either by reapplying your manifest or using kubectl edit. The operator safely scales up or scales down the cluster.
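As a sketch, assuming your manifest defines per-zone isolation groups as in the repository's example manifests, scaling out means raising numInstances in each group and reapplying the manifest:

```yaml
spec:
  isolationGroups:
    - name: group1
      numInstances: 2   # raised from 1
    - name: group2
      numInstances: 2
    - name: group3
      numInstances: 2
```

After running kubectl apply with the updated manifest, the operator handles adding the new instances to the placement.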

Delete a Cluster

kubectl delete m3dbcluster simple-cluster

You also need to remove the etcd data. If you intend to reuse the etcd cluster for another M3 cluster, wipe the data generated by the operator:

kubectl exec etcd-0 -- env ETCDCTL_API=3 etcdctl del --keys-only --prefix ""

Contributing

You can ask questions and give feedback through the community meetings and office hours listed above.

The M3 operator welcomes pull requests. Read the development guide to get set up for building and contributing.


This project is released under the Apache License, Version 2.0.