OBSDATA-492: Sync with upstream druid-operator (#16)
* Restrict K8s role definitions to actually needed roles (druid-io#226)

* PK | druid-io#255 | Restrict K8s role definitions to actually needed roles

* PK | druid-io#255 | Restrict K8s role definitions to actually needed roles - templates/

* PK | druid-io#255 | Add patch to event and newline at the end of the file

* PK | druid-io#255 | Add patch to event to templates

Co-authored-by: Piotr Kmita <piotr.kmita@zalando.de>

* Introduce State Driven Status, use record pkg for event and clean logging (druid-io#223)

* status v1, records

* use druid constructor in main.go

* fix smoke tests

* fix comments

* update delete event, status

* push emitter changes

Co-authored-by: AdheipSingh <adheip.singh@rilldata.com>

* update to go.mod to k8s 1.22 (druid-io#237)

* update to k8s 1.22

* update go.mod

* use 1.19.2 for suit_test

* revert yaml change

* update indentation/spaces in tiny

* update indentation/spaces in tiny

Co-authored-by: AdheipSingh <adheip.singh@rilldata.com>

* Create watched_namespace.yaml (druid-io#248)

* Create watched_namespace.yaml

Create the namespace watched by Druid when it is declared and is not the default

* Update watched_namespace.yaml

Add some clarifications to the comments

* Update watched_namespace.yaml

Fix to create all namespaces required by the WATCH_NAMESPACE env var (a comma-separated list, intended to support this future feature), but only when they do not exist yet
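The comma-separated WATCH_NAMESPACE handling described above can be sketched in Go. This is an illustrative re-statement, not the operator's actual code; the helper name `parseWatchNamespaces` and the empty-means-all-namespaces convention are assumptions:

```go
package main

import (
	"fmt"
	"strings"
)

// parseWatchNamespaces is a hypothetical helper showing how a
// comma-separated WATCH_NAMESPACE value could be split into a list.
// An empty value is taken to mean "watch all namespaces" (nil result).
func parseWatchNamespaces(raw string) []string {
	if raw == "" {
		return nil
	}
	var namespaces []string
	for _, ns := range strings.Split(raw, ",") {
		// Tolerate spaces after commas, e.g. "druid-a, druid-b".
		if trimmed := strings.TrimSpace(ns); trimmed != "" {
			namespaces = append(namespaces, trimmed)
		}
	}
	return namespaces
}

func main() {
	fmt.Println(parseWatchNamespaces("druid-a, druid-b"))
	// prints: [druid-a druid-b]
}
```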

* Fix Helm Chart type error  (druid-io#253)

* run helm lint github actions

* add helm lint, template in CI

* Fix76: Scale STS / PVC vertically to support size resizing of druid nodes (druid-io#97)

* add storage class expansion check, generation check, refactor

* add flag for scalePvcSts

* add component label, update crd

* fix unit tests

* indentation in testdata

* Fixing the image tag to be pulled for druid-operator (druid-io#259)

* fix the image tag to druidio/druid-operator:latest

* Update Druid Operator Documentation (druid-io#255)

* update docs

* update typos

* update docs

* update docs

* Adding support for custom metricDimensions.json for statsd metrics emitter (druid-io#250)

* Adding support for custom metricDimensions.json.

* Addressing Adheip's comments.

* Fixing test failure.

* add patch to pvc rbac (druid-io#261)

* add support for druid 0.22.1 and zookeeper 3.7.0 (druid-io#267)

* add support for druid 0.22.1 and zookeeper 3.7.0

* Update tiny-cluster-zk.yaml

change ZooKeeper version from 3.7 to 3.7.0

* update appVersion to 0.0.8-release (druid-io#238)

Co-authored-by: AdheipSingh <adheip.singh@rilldata.com>

* Fix-214-Merge-Labels (druid-io#273)

* add health checks to operator manage (druid-io#278)

* safe guard pod pending deletion of pvc (druid-io#289)

* make nodeselector consistent (druid-io#284)

* make nodeselector consistent

* update handler

* fix revert, scale down pods replicas to zero (druid-io#283)

* fix typo for startupProbe (druid-io#282)

* Bump up chart version to 0.1.2 (druid-io#271) (druid-io#272)

* Watch Multiple Namespaces (druid-io#240)

* WIP: on druid k8s extension, do, and watch ns

* support for multiple namespaces to watch

* support for multiple namespaces to watch

* fix typo

* fix indentation

* Release 0.0.9 - Go Mod and Dockerfile Update (druid-io#291)

* go modules update

* add icon

* update dockerfile

* fix tests

* update helm version

* max reconciles workerqueue (druid-io#299)

* emit detect underlying object type on CRUD (druid-io#300)

* make crd optional (druid-io#307)

* multiple watched namespaces (druid-io#309)

* Sidecar container spec for druid (druid-io#296)

* Sidecar container spec for druid

* Fix PR comments on additionalcontainer

* Add envFrom and add comments

* Add example spec to example.md

* Add multi container deployment to features.md

Co-authored-by: cinto <cinto@apple.com>

* update makefile, tests, controller-gen (druid-io#315)

* update make

* update makefile

* Fix-312-Run Test on CI, Controller Gen Version Update (druid-io#319)

* update fixes

* add more tests

* fix test kubebuilder

* separate build

* revert build

Co-authored-by: AdheipSingh <adheips1222@gmail.com>

* Adding minikube-setup instructions.

Update tiny-cluster.yaml to make it work in minikube.

Fixing operator startup bug.

Miscellaneous fixes to make local minikube setup work.

Fixing MM readiness probe and steps for minikube ingress-dns issue for local minikube setup.

* semaphore

path

generate

fix format

init-ci

add ci bin

path

kubebuilder move

dummy change

* Add command args to container creation (#6)

* go/codeowners: Generate CODEOWNERS [ci skip] (#7)

* [METRICS-4348] update obs-data team as codeowners (#8)

* [METRICS-4487] add obs-oncall as codeowners (#10)

* DP-8085 - Migrate to Semaphore self-hosted agent (#9)

* DP-9370 - Migrate to Semaphore self-hosted agent (#15)

* chore: update repo semaphore project

* Update service.yml (#17)

Update semaphore job commands and go version

update build triggers

update semaphore whitelist

Update project.yml

* update CI 

* nodespec should take precedence (#13)

Co-authored-by: piotrkmita <43473995+piotrkmita@users.noreply.github.com>
Co-authored-by: Piotr Kmita <piotr.kmita@zalando.de>
Co-authored-by: AdheipSingh <34169002+AdheipSingh@users.noreply.github.com>
Co-authored-by: AdheipSingh <adheip.singh@rilldata.com>
Co-authored-by: Alby Hernández <61636487+achetronic@users.noreply.github.com>
Co-authored-by: shrutimantri <shruti1810@gmail.com>
Co-authored-by: Harini Rajendran <harini.rajendran@yahoo.com>
Co-authored-by: RoelofKuijpers <roelof@datanetic.com>
Co-authored-by: Youngwoo Kim <ywkim@apache.org>
Co-authored-by: Vladislav <vladislavPV@users.noreply.github.com>
Co-authored-by: Zhang Lu <91473238+zhangluva@users.noreply.github.com>
Co-authored-by: cintoSunny <67714887+cintoSunny@users.noreply.github.com>
Co-authored-by: cinto <cinto@apple.com>
Co-authored-by: AdheipSingh <adheips1222@gmail.com>
Co-authored-by: Harini Rajendran <hrajendran@confluent.io>
Co-authored-by: Luke Young <91491244+lyoung-confluent@users.noreply.github.com>
Co-authored-by: Yun Fu <fuyun12345@gmail.com>
Co-authored-by: nlou9 <39046184+nlou9@users.noreply.github.com>
Co-authored-by: Corey Christous <cchristous@gmail.com>
Co-authored-by: Confluent Jenkins Bot <jenkins@confluent.io>
21 people committed Jan 6, 2023
1 parent 72998b1 commit 4595a63
Showing 12 changed files with 402 additions and 49 deletions.
1 change: 1 addition & 0 deletions .github/CODEOWNERS
@@ -0,0 +1 @@
* @confluentinc/obs-data @confluentinc/obs-oncall
37 changes: 37 additions & 0 deletions .semaphore/project.yml
@@ -0,0 +1,37 @@
apiVersion: v1alpha
kind: Project
metadata:
name: druid-operator
description: ""
spec:
visibility: private
repository:
url: git@github.com:confluentinc/druid-operator.git
run_on:
- branches
- tags
- pull_requests
pipeline_file: .semaphore/semaphore.yml
integration_type: github_app
status:
pipeline_files:
- path: .semaphore/semaphore.yml
level: pipeline
whitelist:
branches:
- master
- cc-druid-operator
custom_permissions: true
debug_permissions:
- empty
- default_branch
- non_default_branch
- pull_request
- forked_pull_request
- tag
attach_permissions:
- default_branch
- non_default_branch
- pull_request
- forked_pull_request
- tag
21 changes: 21 additions & 0 deletions .semaphore/semaphore.yml
@@ -0,0 +1,21 @@
version: v1.0
name: druid-operator
agent:
machine:
type: s1-prod-ubuntu20-04-amd64-1

blocks:
- name: Build, Test
task:
prologue:
commands:
- sem-version go 1.19
- checkout
jobs:
- name: Build, Test
commands:
- make build
- make test
- make lint
- make template
- make docker-build
2 changes: 1 addition & 1 deletion Makefile
@@ -16,7 +16,7 @@ SHELL = /usr/bin/env bash -o pipefail
.SHELLFLAGS = -ec

.PHONY: all
all: build
all: build test lint template docker-build

##@ General

9 changes: 8 additions & 1 deletion apis/druid/v1alpha1/druid_types.go
@@ -91,9 +91,16 @@ type DruidSpec struct {
// +optional
DeleteOrphanPvc bool `json:"deleteOrphanPvc"`

// Required: path to druid start script to be run on container start
// Required: Command to be run on container start
StartScript string `json:"startScript"`

// Optional: bash/sh entry arg. Set startScript to `sh` or `bash` to customize entryArg
// For example, the container can run `sh -c "${EntryArg} && ${DruidScript} {nodeType}"`
EntryArg string `json:"entryArg,omitempty"`

// Optional: Customized druid shell script path. If not set, the default would be "bin/run-druid.sh"
DruidScript string `json:"druidScript,omitempty"`

// Required here or at nodeSpec level
Image string `json:"image,omitempty"`

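The two new optional fields above can be combined with `startScript` in a Druid manifest roughly like this. This is an illustrative sketch: the cluster name, image tag, and entryArg value are placeholders, and the expansion follows the `sh -c "${EntryArg} && ${DruidScript} {nodeType}"` comment in the hunk:

```yaml
apiVersion: druid.apache.org/v1alpha1
kind: Druid
metadata:
  name: example-cluster        # hypothetical name
spec:
  image: apache/druid:0.22.1
  # Setting startScript to a shell plus an entryArg makes the container
  # run: sh -c "<entryArg> && <druidScript> <nodeType>"
  startScript: sh
  entryArg: "export DRUID_LOG_DIR=/druid/logs"
  druidScript: bin/run-druid.sh   # optional; defaults to bin/run-druid.sh
```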
4 changes: 4 additions & 0 deletions chart/templates/crds/druid.apache.org_druids.yaml
@@ -8847,6 +8847,10 @@ spec:
description: 'Required: path to druid start script to be run on container
start'
type: string
entryArg:
type: string
druidScript:
type: string
startUpProbe:
description: 'Optional: StartupProbe for nodeSpec'
properties:
21 changes: 19 additions & 2 deletions controllers/druid/handler.go
@@ -9,6 +9,7 @@ import (
"fmt"
"regexp"
"sort"
"strings"

autoscalev2beta2 "k8s.io/api/autoscaling/v2beta2"
networkingv1 "k8s.io/api/networking/v1"
@@ -1138,6 +1139,21 @@ func getVolume(nodeSpec *v1alpha1.DruidNodeSpec, m *v1alpha1.Druid, nodeSpecUniq
return volumesHolder
}

func getCommand(nodeSpec *v1alpha1.DruidNodeSpec, m *v1alpha1.Druid) []string {
if m.Spec.StartScript != "" && m.Spec.EntryArg != "" {
return []string{m.Spec.StartScript}
}
return []string{firstNonEmptyStr(m.Spec.StartScript, "bin/run-druid.sh"), nodeSpec.NodeType}
}

func getEntryArg(nodeSpec *v1alpha1.DruidNodeSpec, m *v1alpha1.Druid) []string {
if m.Spec.EntryArg != "" {
bashCommands := strings.Join([]string{m.Spec.EntryArg, "&&", firstNonEmptyStr(m.Spec.DruidScript, "bin/run-druid.sh"), nodeSpec.NodeType}, " ")
return []string{"-c", bashCommands}
}
return nil
}

func getEnv(nodeSpec *v1alpha1.DruidNodeSpec, m *v1alpha1.Druid, configMapSHA string) []v1.EnvVar {
envHolder := firstNonNilValue(nodeSpec.Env, m.Spec.Env).([]v1.EnvVar)
// enables to do the trick to force redeployment in case of configmap changes.
@@ -1308,7 +1324,8 @@ func makePodSpec(nodeSpec *v1alpha1.DruidNodeSpec, m *v1alpha1.Druid, nodeSpecUn
v1.Container{
Image: firstNonEmptyStr(nodeSpec.Image, m.Spec.Image),
Name: fmt.Sprintf("%s", nodeSpecUniqueStr),
Command: []string{firstNonEmptyStr(m.Spec.StartScript, "bin/run-druid.sh"), nodeSpec.NodeType},
Command: getCommand(nodeSpec, m),
Args: getEntryArg(nodeSpec, m),
ImagePullPolicy: v1.PullPolicy(firstNonEmptyStr(string(nodeSpec.ImagePullPolicy), string(m.Spec.ImagePullPolicy))),
Ports: nodeSpec.Ports,
Resources: nodeSpec.Resources,
@@ -1345,7 +1362,7 @@ func makePodSpec(nodeSpec *v1alpha1.DruidNodeSpec, m *v1alpha1.Druid, nodeSpecUn
}

spec := v1.PodSpec{
NodeSelector: firstNonNilValue(m.Spec.NodeSelector, nodeSpec.NodeSelector).(map[string]string),
NodeSelector: firstNonNilValue(nodeSpec.NodeSelector, m.Spec.NodeSelector).(map[string]string),
TopologySpreadConstraints: getTopologySpreadConstraints(nodeSpec),
Tolerations: getTolerations(nodeSpec, m),
Affinity: getAffinity(nodeSpec, m),
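The effect of `getCommand` and `getEntryArg` in the hunk above can be shown standalone. This is a simplified sketch that restates the same precedence rules with plain strings in place of the operator's CRD types; `buildCommandArgs` is a hypothetical name, not a function in the repo:

```go
package main

import (
	"fmt"
	"strings"
)

// buildCommandArgs restates the getCommand/getEntryArg logic: when both
// startScript and entryArg are set, startScript becomes the container
// entrypoint and the druid script moves into a "-c" shell argument;
// otherwise the start script (default bin/run-druid.sh) runs directly
// with the node type as its argument.
func buildCommandArgs(startScript, entryArg, druidScript, nodeType string) (command, args []string) {
	script := druidScript
	if script == "" {
		script = "bin/run-druid.sh"
	}
	if startScript != "" && entryArg != "" {
		command = []string{startScript}
		args = []string{"-c", strings.Join([]string{entryArg, "&&", script, nodeType}, " ")}
		return command, args
	}
	start := startScript
	if start == "" {
		start = "bin/run-druid.sh"
	}
	return []string{start, nodeType}, nil
}

func main() {
	cmd, args := buildCommandArgs("sh", "export FOO=1", "", "historical")
	fmt.Println(cmd, args)
	// prints: [sh] [-c export FOO=1 && bin/run-druid.sh historical]
}
```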
14 changes: 1 addition & 13 deletions deploy/operator.yaml
@@ -19,19 +19,7 @@ spec:
image: druidio/druid-operator:latest
command:
- /manager
imagePullPolicy: Always
livenessProbe:
httpGet:
path: /healthz
port: 8081
initialDelaySeconds: 15
periodSeconds: 20
readinessProbe:
httpGet:
path: /readyz
port: 8081
initialDelaySeconds: 5
periodSeconds: 10
imagePullPolicy: IfNotPresent
env:
- name: WATCH_NAMESPACE
valueFrom:
5 changes: 1 addition & 4 deletions examples/tiny-cluster-zk.yaml
@@ -53,11 +53,8 @@ spec:
- containerPort: 3888
name: zk-elec-port
resources:
limits:
cpu: 1
memory: 512Mi
requests:
cpu: 1
cpu: 100m
memory: 512Mi
volumeMounts:
- mountPath: /data
127 changes: 99 additions & 28 deletions examples/tiny-cluster.yaml
@@ -6,7 +6,7 @@ kind: "Druid"
metadata:
name: tiny-cluster
spec:
image: apache/druid:0.22.1
image: confluent-docker.jfrog.io/confluentinc/cc-druid:v1.202.0
# Optionally specify image for all nodes. Can be specify on nodes also
# imagePullSecrets:
# - name: tutu
@@ -31,7 +31,7 @@ spec:
commonConfigMountPath: "/opt/druid/conf/druid/cluster/_common"
jvm.options: |-
-server
-XX:MaxDirectMemorySize=10240g
-XX:MaxDirectMemorySize=1g
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Dlog4j.debug
@@ -52,26 +52,46 @@ spec:
</Loggers>
</Configuration>
common.runtime.properties: |
druid.startup.logging.logProperties=true
druid.sql.enable=true
# Zookeeper
druid.zk.service.host=tiny-cluster-zk-0.tiny-cluster-zk
druid.zk.paths.base=/druid
druid.zk.service.compress=false
# Metadata Store
druid.metadata.storage.type=derby
druid.metadata.storage.connector.connectURI=jdbc:derby://localhost:1527/druid/data/derbydb/metadata.db;create=true
druid.metadata.storage.connector.host=localhost
druid.metadata.storage.connector.port=1527
druid.metadata.storage.connector.createTables=true
# druid.metadata.storage.type=derby
# druid.metadata.storage.connector.connectURI=jdbc:derby://localhost:1527/druid/data/derbydb/metadata.db;create=true
# druid.metadata.storage.connector.host=localhost
# druid.metadata.storage.connector.port=1527
# druid.metadata.storage.connector.createTables=true
druid.metadata.storage.type=postgresql
druid.metadata.storage.connector.connectURI=jdbc:postgresql://druid-metadata-postgresql.tiny-cluster.svc.cluster.local:5432/druiddb
druid.metadata.storage.connector.user=druidpguser
druid.metadata.storage.connector.password=druidpgpassword
# Deep Storage
druid.storage.type=local
druid.storage.storageDirectory=/druid/deepstorage
# druid.storage.type=local
# druid.storage.storageDirectory=/druid/deepstorage
druid.storage.type=s3
druid.storage.bucket=druidio
druid.storage.baseKey=local/segments
# Indexing service logs
druid.indexer.logs.type=s3
druid.indexer.logs.s3Bucket=druidio
druid.indexer.logs.s3Prefix=local/indexing-logs
druid.s3.protocol=http
druid.s3.endpoint.url=minio.minio.svc.cluster.local:9000
druid.s3.enablePathStyleAccess=true
druid.s3.accessKey=AKIAIOSFODNN7EXAMPLE
druid.s3.secretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
#
# Extensions
#
druid.extensions.loadList=["druid-kafka-indexing-service"]
druid.extensions.loadList=["postgresql-metadata-storage", "druid-s3-extensions", "druid-kafka-indexing-service", "druid-datasketches", "druid-basic-security", "druid-opencensus-extensions", "statsd-emitter", "confluent-extensions"]
#
# Service discovery
@@ -259,6 +279,8 @@ spec:
path: /tmp/druid/deepstorage
type: DirectoryOrCreate
env:
- name: AWS_REGION
value: us-east-1
- name: POD_NAME
valueFrom:
fieldRef:
@@ -292,6 +314,10 @@ spec:
extra.jvm.options: |-
-Xmx512M
-Xms512M
resources:
requests:
memory: "600Mi"
cpu: "200m"

coordinators:
# Optionally specify for running coordinator as Deployment
@@ -315,6 +341,10 @@ spec:
extra.jvm.options: |-
-Xmx512M
-Xms512M
resources:
requests:
memory: "600Mi"
cpu: "200m"

historicals:
nodeType: "historical"
@@ -324,37 +354,78 @@ spec:
runtime.properties: |
druid.service=druid/historical
druid.server.http.numThreads=5
druid.processing.buffer.sizeBytes=536870912
druid.processing.numMergeBuffers=1
druid.processing.numThreads=1
druid.processing.buffer.sizeBytes=268435456
# Segment storage
druid.segmentCache.locations=[{\"path\":\"/druid/data/segments\",\"maxSize\":10737418240}]
druid.server.maxSize=10737418240
extra.jvm.options: |-
-Xmx512M
-Xms512M
-Xmx1G
-Xms1G
-XX:MaxDirectMemorySize=1g
resources:
requests:
memory: "2G"
cpu: "1000m"

routers:
nodeType: "router"
# Optionally specify for broker nodes
# imagePullSecrets:
# - name: tutu
druid.port: 8088
nodeConfigMountPath: "/opt/druid/conf/druid/cluster/query/router"
replicas: 1
runtime.properties: |
druid.service=druid/router
# HTTP proxy
druid.router.http.numConnections=10
druid.router.http.readTimeout=PT5M
druid.router.http.numMaxThreads=10
druid.router.managementProxy.enabled=true
# HTTP server threads
druid.router.http.numConnections=5
druid.server.http.numThreads=10
# Service discovery
druid.router.defaultBrokerServiceName=druid/broker
druid.router.coordinatorServiceName=druid/coordinator
# Management proxy to coordinator / overlord: required for unified web console.
druid.router.managementProxy.enabled=true
# Processing threads and buffers
druid.processing.buffer.sizeBytes=1
druid.processing.numMergeBuffers=1
druid.processing.numThreads=1
extra.jvm.options: |-
-Xmx512M
-Xms512M
-Xmx512m
-Xms512m
resources:
requests:
memory: "600Mi"
cpu: "200m"
ingress:
rules:
- host: druid.tiny.test
http:
paths:
- path: /
backend:
serviceName: druid-tiny-cluster-routers
servicePort: 8088

middlemanagers:
nodeConfigMountPath: "/opt/druid/conf/druid/cluster/data/middleManager"
nodeType: "middleManager"
druid.port: 8091
replicas: 1
runtime.properties: |-
druid.service=druid/middleManager
druid.worker.capacity=1
druid.indexer.runner.javaOpts=-server -XX:MaxDirectMemorySize=1g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/druid/data/tmp -Dlog4j.debug -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=50 -XX:GCLogFileSize=10m -XX:+ExitOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError -XX:+UseG1GC -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager -Xloggc:/druid/data/logs/peon.gc.%t.%p.log -XX:HeapDumpPath=/druid/data/logs/peon.%t.%p.hprof -Xms1G -Xmx1G
druid.indexer.task.baseTaskDir=/druid/data/baseTaskDir
druid.server.http.numThreads=10
druid.indexer.fork.property.druid.processing.buffer.sizeBytes=268435456
druid.indexer.fork.property.druid.processing.numMergeBuffers=1
druid.indexer.fork.property.druid.processing.numThreads=1
extra.jvm.options: |-
-Xmx1G
-Xms1G
readinessProbe:
httpGet:
path: /status/health
port: 8091
resources:
requests:
memory: "2Gi"
cpu: "400m"
