deploy: run pika && pika-operator on MiniKube environment #1330

Merged 1 commit on Mar 8, 2023
2 changes: 2 additions & 0 deletions operator/Makefile
@@ -253,3 +253,5 @@ catalog-build: opm ## Build a catalog image.
.PHONY: catalog-push
catalog-push: ## Push a catalog image.
$(MAKE) docker-push IMG=$(CATALOG_IMG)

include MiniKube.mk
49 changes: 49 additions & 0 deletions operator/MiniKube.mk
@@ -0,0 +1,49 @@
##@ MiniKube

PIKA_IMAGE ?= pikadb/pika:v3.4.0
PIKA_OPERATOR_IMAGE ?= pika-operator:dev

LOCAL_CLUSTER_NAME ?= mini-pika
LOCAL_CLUSTER_VERSION ?= v1.25.3

.PHONY: minikube-up
minikube-up: ## Start minikube.
@minikube version || (echo "minikube is not installed" && exit 1)
minikube start --kubernetes-version $(LOCAL_CLUSTER_VERSION)

.PHONY: minikube-reset
minikube-reset: ## Reset minikube.
minikube delete

.PHONY: set-local-env
set-local-env: ## Set local env.
export IMG=$(PIKA_OPERATOR_IMAGE)

.PHONY: minikube-image-load
minikube-image-load: ## Load image to minikube.
ifeq ($(shell docker images -q $(PIKA_IMAGE) 2> /dev/null),)
docker pull $(PIKA_IMAGE)
endif
docker tag $(PIKA_IMAGE) pika:dev
minikube image load pika:dev
minikube image load $(PIKA_OPERATOR_IMAGE)

.PHONY: deploy-pika-sample
deploy-pika-sample: ## Deploy pika-sample.
kubectl apply -f examples/pika-minikube/
sleep 10
kubectl wait pods -l app=pika-minikube --for condition=Ready --timeout=90s
kubectl run pika-minikube-test --image redis -it --rm --restart=Never \
-- /usr/local/bin/redis-cli -h pika-minikube -p 9221 info | grep -E '^pika_'

.PHONY: uninstall-pika-sample
uninstall-pika-sample: ## Uninstall pika-sample.
kubectl delete -f examples/pika-minikube/

##@ Local Deploy
.PHONY: local-deploy
local-deploy: set-local-env docker-build minikube-image-load install deploy deploy-pika-sample

##@ Local Clean
.PHONY: local-clean
local-clean: uninstall-pika-sample uninstall
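
Taken together, `local-deploy` chains the whole workflow and `local-clean` reverses it. A minimal end-to-end session might look like this (a sketch; it assumes the `docker-build`, `install`, `deploy`, and `uninstall` targets from the operator's existing kubebuilder Makefile):

```sh
cd operator
make minikube-up       # start a local cluster on Kubernetes v1.25.3
make local-deploy      # build + load images, install CRDs, deploy operator and sample
make local-clean       # remove the sample and uninstall the CRDs
make minikube-reset    # delete the minikube cluster entirely
```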
32 changes: 30 additions & 2 deletions operator/README.md
@@ -13,11 +13,39 @@ It is responsible for creating and managing the following resources:

## Getting Started

You’ll need a Kubernetes cluster to run against. You can use [MiniKube](https://minikube.sigs.k8s.io)
or [KIND](https://kind.sigs.k8s.io) to get a local cluster for testing, or run against a remote cluster.

**Note:** Your controller will automatically use the current context in your kubeconfig file (i.e. whatever
cluster `kubectl cluster-info` shows).
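
For example, to confirm which cluster you are about to deploy into:

```sh
kubectl config current-context   # e.g. "minikube" after `make minikube-up`
kubectl cluster-info             # the API server the controller will talk to
```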

### Running locally with MiniKube

1. Install [MiniKube](https://minikube.sigs.k8s.io/docs/start/)

2. Start a local cluster:

```sh
make minikube-up # run this if you don't have a minikube cluster
make local-deploy
```

Or if you want to use a development pika image:

```sh
make local-deploy PIKA_IMAGE=<your-pika-image>
```

If you see output like the following, the pika-operator is running successfully:

```sh
************ TEST PIKA ************
kubectl run pika-minikube-test ...
pika_version:3.4.0
pika_git_sha:bd30511bf82038c2c6531b3d84872c9825fe836a
pika_build_compile_date: Dec 1 2020
```
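
You can also probe the instance by hand; pika speaks the Redis protocol on port 9221, so any `redis-cli` works. A quick sketch (the pod name `pika-client` is arbitrary):

```sh
kubectl run pika-client --image redis -it --rm --restart=Never -- \
  redis-cli -h pika-minikube -p 9221 ping   # expect: PONG
```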

### Running on the cluster

1. Install Instances of Custom Resources:
1 change: 1 addition & 0 deletions operator/config/default/manager_auth_proxy_patch.yaml
@@ -32,6 +32,7 @@ spec:
drop:
- "ALL"
image: gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0
imagePullPolicy: IfNotPresent
args:
- "--secure-listen-address=0.0.0.0:8443"
- "--upstream=http://127.0.0.1:8080/"
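
The explicit `imagePullPolicy: IfNotPresent` matters on MiniKube: images side-loaded with `minikube image load` exist only in the node's container runtime, so a policy of `Always` would attempt a registry pull and fail. To verify the loads succeeded:

```sh
minikube image ls | grep pika   # both pika:dev and pika-operator:dev should appear
```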
15 changes: 15 additions & 0 deletions operator/config/rbac/role.yaml
@@ -23,7 +23,10 @@ rules:
- events
verbs:
- create
- get
- list
- patch
- watch
- apiGroups:
- ""
resources:
@@ -32,6 +35,18 @@ rules:
- get
- list
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- pika.openatom.org
resources:
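
The new `services` rule matches the controller change below: the reconciler now manages a Service per Pika instance, so the manager's ServiceAccount needs full CRUD on Services. If deployment fails with RBAC errors, a quick check (the namespace and ServiceAccount name here are guesses based on kubebuilder defaults, not taken from this PR):

```sh
kubectl auth can-i create services \
  --as=system:serviceaccount:pika-operator-system:pika-operator-controller-manager
```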
3 changes: 2 additions & 1 deletion operator/controllers/pika_controller.go
@@ -39,7 +39,8 @@ type PikaReconciler struct {
//+kubebuilder:rbac:groups=pika.openatom.org,resources=pikas,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=pika.openatom.org,resources=pikas/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=pika.openatom.org,resources=pikas/finalizers,verbs=update
//+kubebuilder:rbac:groups=core,resources=events,verbs=get;list;watch;create;patch
//+kubebuilder:rbac:groups=core,resources=services,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=apps,resources=statefulsets,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch
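
In a kubebuilder project these `//+kubebuilder:rbac` markers are the source of truth for `config/rbac/role.yaml`; the manifest above is regenerated rather than edited by hand:

```sh
make manifests   # regenerates config/rbac/role.yaml from the markers
```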

166 changes: 166 additions & 0 deletions operator/examples/pika-minikube/pika-cm.yaml
@@ -0,0 +1,166 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: pika-minikube-config
namespace: default
data:
pika.conf: |-
# Pika port
port : 9221
# Thread Number
thread-num : 1
# Thread Pool Size
thread-pool-size : 12
# Sync Thread Number
sync-thread-num : 6
# Pika log path
log-path : /data/log/
# Pika db path
db-path : /data/db/
# Pika write-buffer-size
write-buffer-size : 268435456
# size of one block in arena memory allocation.
# If <= 0, a proper value is automatically calculated
# (usually 1/8 of write-buffer-size, rounded up to a multiple of 4KB)
arena-block-size :
# Pika timeout
timeout : 60
# Requirepass
requirepass :
# Masterauth
masterauth :
# Userpass
userpass :
# User Blacklist
userblacklist :
# if this option is set to 'classic', pika supports multiple DBs and the
# 'databases' option takes effect
# if this option is set to 'sharding', pika supports multiple tables and you
# can specify the slot number for each table via the 'default-slot-num' option
# Pika instance mode [classic | sharding]
instance-mode : classic
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases' - 1, limited in [1, 8]
databases : 1
# default slot number each table in sharding mode
default-slot-num : 1024
# replication num defines how many followers in a single raft group, only [0, 1, 2, 3, 4] is valid
replication-num : 0
# consensus level defines how many acknowledgements the leader collects before
# committing a log entry back to the client, only [0, ...replication-num] is valid
consensus-level : 0
# Dump Prefix
dump-prefix :
# daemonize [yes | no]
#daemonize : yes
# Dump Path
dump-path : /data/dump/
# Expire-dump-days
dump-expire : 0
# pidfile Path
pidfile : /var/run/pika.pid
# Max Connection
maxclients : 20000
# the per file size of sst to compact, default is 20M
target-file-size-base : 20971520
# Expire-logs-days
expire-logs-days : 7
# Expire-logs-nums
expire-logs-nums : 10
# Root-connection-num
root-connection-num : 2
# Slowlog-write-errorlog
slowlog-write-errorlog : no
# Slowlog-log-slower-than
slowlog-log-slower-than : 10000
# Slowlog-max-len
slowlog-max-len : 128
# Pika db sync path
db-sync-path : /data/dbsync/
# db sync speed(MB): max is 1024MB and min is 0; if the value is below 0 or above 1024, it will be adjusted to 1024
db-sync-speed : -1
# The slave priority
slave-priority : 100
# network interface
#network-interface : eth1
# replication
#slaveof : master-ip:master-port

# CronTask, format 1: start-end/ratio, like 02-04/60, pika will check to schedule compaction between 2 to 4 o'clock everyday
# if the freesize/disksize > 60%.
# format 2: week/start-end/ratio, like 3/02-04/60, pika will check to schedule compaction between 2 to 4 o'clock
# every wednesday, if the freesize/disksize > 60%.
# NOTICE: if compact-interval is set, compact-cron will be masked and disabled.
#
#compact-cron : 3/02-04/60

# Compact-interval, format: interval/ratio, like 6/60, pika will check to schedule compaction every 6 hours,
# if the freesize/disksize > 60%. NOTICE: compact-interval takes precedence over compact-cron;
#compact-interval :

# the size of flow control window while syncing binlog between master and slave. Default is 9000 and the maximum is 90000.
sync-window-size : 9000
# max value of connection read buffer size: configurable value 67108864(64MB) or 268435456(256MB) or 536870912(512MB)
# default value is 268435456(256MB)
# NOTICE: master and slave should share exactly the same value
max-conn-rbuf-size : 268435456


###################
## Critical Settings
###################
# write_binlog [yes | no]
write-binlog : yes
# binlog file size: default is 100M, limited in [1K, 2G]
# slave binlog file size must be the same with master's
binlog-file-size : 104857600
# Automatically triggers a small compaction according to statistics
# Use the cache to store up to 'max-cache-statistic-keys' keys
# if 'max-cache-statistic-keys' is set to '0', the statistics function is turned off
# and small compactions are not triggered automatically
max-cache-statistic-keys : 0
# When a specific multi-data-structure key is deleted or overwritten 'small-compaction-threshold' times,
# a small compact is triggered automatically, default is 5000, limited in [1, 100000]
small-compaction-threshold : 5000
# If the total size of all live memtables of all the DBs exceeds
# the limit, a flush will be triggered in the next DB to which the next write
# is issued.
max-write-buffer-size : 10737418240
# The maximum number of write buffers that are built up in memory for one ColumnFamily in DB.
# The default and the minimum number is 2, so that when 1 write buffer
# is being flushed to storage, new writes can continue to the other write buffer.
# If max-write-buffer-num > 3, writing will be slowed down
# if we are writing to the last write buffer allowed.
max-write-buffer-num : 2
# Limit some command response size, like Scan, Keys*
max-client-response-size : 1073741824
# Compression type supported [snappy, zlib, lz4, zstd]
compression : snappy
# max-background-flushes: default is 1, limited in [1, 4]
max-background-flushes : 1
# max-background-compactions: default is 2, limited in [1, 8]
max-background-compactions : 2
# maximum value of Rocksdb cached open file descriptors
max-cache-files : 5000
# max_bytes_for_level_multiplier: default is 10, you can change it to 5
max-bytes-for-level-multiplier : 10
# BlockBasedTable block_size, default 4k
# block-size: 4096
# block LRU cache, default 8M, 0 to disable
# block-cache: 8388608
# num-shard-bits default -1, the number of bits from cache keys to be used as shard id.
# The cache will be sharded into 2^num_shard_bits shards.
# https://github.com/EighteenZi/rocksdb_wiki/blob/master/Block-Cache.md#lru-cache
# num-shard-bits: -1
# whether the block cache is shared among the RocksDB instances, default is per CF
# share-block-cache: no
# whether or not index and filter blocks are stored in block cache
# cache-index-and-filter-blocks: no
# pin_l0_filter_and_index_blocks_in_cache [yes | no]
# When `cache-index-and-filter-blocks` is enabled, `pin_l0_filter_and_index_blocks_in_cache` is suggested to be enabled
# pin_l0_filter_and_index_blocks_in_cache : no
# when set to yes, bloomfilter of the last level will not be built
# optimize-filters-for-hits: no
# https://github.com/facebook/rocksdb/wiki/Leveled-Compaction#levels-target-size
# level-compaction-dynamic-level-bytes: no
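
Once applied, this ConfigMap reaches pika through the `pikaExternalConfig` field of the Pika resource below. To confirm it landed in the cluster:

```sh
kubectl get configmap pika-minikube-config -o yaml | head -n 20
```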
9 changes: 9 additions & 0 deletions operator/examples/pika-minikube/pika-pika.yaml
@@ -0,0 +1,9 @@
apiVersion: pika.openatom.org/v1alpha1
kind: Pika
metadata:
name: pika-minikube
spec:
image: pika:dev
pikaExternalConfig: pika-minikube-config
storageType: "pvc"
storageSize: "10Gi"
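
A minimal smoke test of the sample outside the Makefile targets (the `pika` resource name is assumed from the CRD's `kind: Pika`; the pod label comes from the `deploy-pika-sample` target):

```sh
kubectl apply -f operator/examples/pika-minikube/
kubectl get pika pika-minikube          # resource name assumed, not confirmed in this PR
kubectl get pods -l app=pika-minikube   # label used by `kubectl wait` in deploy-pika-sample
```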