Renaming to cloud-on-k8s (#760)
thbkrkr committed May 7, 2019
1 parent 48758a0 commit 0c1193b
Showing 374 changed files with 1,160 additions and 1,160 deletions.
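The change set is a mechanical path rename: every occurrence of the old repository path becomes the new one, which is why additions and deletions balance exactly. A minimal sketch of that substitution, assuming a plain string replacement (the commit does not record what tooling was actually used):

```python
# Illustrative sketch of the rename applied across the tree; the function
# name and the use of str.replace are assumptions, not the actual tooling.
OLD = "github.com/elastic/k8s-operators"
NEW = "github.com/elastic/cloud-on-k8s"

def rename_paths(text: str) -> str:
    """Rewrite old repository paths to the new repository name."""
    return text.replace(OLD, NEW)

# prints the URL with cloud-on-k8s substituted for k8s-operators
print(rename_paths(
    "https://github.com/elastic/k8s-operators/tree/master/CONTRIBUTING.md"
))
```

Applied file by file, a substitution like this produces exactly the paired old/new lines shown in the hunks below.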
2 changes: 1 addition & 1 deletion .github/PULL_REQUEST_TEMPLATE.md
@@ -7,5 +7,5 @@ attention.
-->

- Have you signed the [contributor license agreement](https://www.elastic.co/contributor-agreement)?
- Have you followed the [contributor guidelines](https://github.com/elastic/k8s-operators/tree/master/CONTRIBUTING.md)?
- Have you followed the [contributor guidelines](https://github.com/elastic/cloud-on-k8s/tree/master/CONTRIBUTING.md)?
- If you submit code, is your pull request against master? We recommend pull requests against master. We will backport them as needed.
6 changes: 3 additions & 3 deletions CONTRIBUTING.md
@@ -17,11 +17,11 @@ The goal of this document is to provide a high-level overview on how you can get

## Report your bugs

If you find an issue, check first our [list of issues](https://github.com/elastic/k8s-operators/issues). If your problem has not been reported yet, open a new issue, add a detailed description on how to reproduce the problem and complete it with any additional information that might help solving the issue.
If you find an issue, check first our [list of issues](https://github.com/elastic/cloud-on-k8s/issues). If your problem has not been reported yet, open a new issue, add a detailed description on how to reproduce the problem and complete it with any additional information that might help solving the issue.

## Set up your development environment

Check requirements and steps in this [README](https://github.com/elastic/k8s-operators/blob/master/operators/README.md).
Check requirements and steps in this [README](https://github.com/elastic/cloud-on-k8s/blob/master/operators/README.md).

## Contribute with your code

@@ -97,6 +97,6 @@ Here are some good practices for a good pull request:

## Design documents

We keep track of architectural decisions through the [architectural decision records](https://adr.github.io/). All records must apply the [Markdown Architectural Decision Records](https://adr.github.io/madr/) format. We recommend to read [these documents](https://github.com/elastic/k8s-operators/tree/master/docs/design) to understand the technical choices that we make.
We keep track of architectural decisions through the [architectural decision records](https://adr.github.io/). All records must apply the [Markdown Architectural Decision Records](https://adr.github.io/madr/) format. We recommend to read [these documents](https://github.com/elastic/cloud-on-k8s/tree/master/docs/design) to understand the technical choices that we make.

Thank you for taking the time to contribute.
58 changes: 29 additions & 29 deletions build/ci/Makefile
@@ -5,7 +5,7 @@
# This Makefile is mostly used for continuous integration.

ROOT_DIR = $(CURDIR)/../..
GO_MOUNT_PATH ?= /go/src/github.com/elastic/k8s-operators
GO_MOUNT_PATH ?= /go/src/github.com/elastic/cloud-on-k8s

VAULT_GKE_CREDS_SECRET ?= secret/cloud-team/cloud-ci/ci-gcp-k8s-operator
GKE_CREDS_FILE ?= credentials.json
@@ -18,60 +18,60 @@ check-license-header:
# login to vault and retrieve gke creds into $GKE_CREDS_FILE
vault-gke-creds:
VAULT_TOKEN=$$(vault write -field=token auth/approle/login role_id=$(VAULT_ROLE_ID) secret_id=$(VAULT_SECRET_ID)) \
vault read \
-address=$(VAULT_ADDR) \
-field=service-account \
$(VAULT_GKE_CREDS_SECRET) \
> $(GKE_CREDS_FILE)
vault read \
-address=$(VAULT_ADDR) \
-field=service-account \
$(VAULT_GKE_CREDS_SECRET) \
> $(GKE_CREDS_FILE)

# reads Elastic public key from Vault into $PUBLIC_KEY_FILE
vault-public-key:
VAULT_TOKEN=$$(vault write -field=token auth/approle/login role_id=$(VAULT_ROLE_ID) secret_id=$(VAULT_SECRET_ID)) \
vault read \
-address=$(VAULT_ADDR) \
-field=pubkey \
$(VAULT_PUBLIC_KEY) \
| base64 --decode \
> $(PUBLIC_KEY_FILE)
vault read \
-address=$(VAULT_ADDR) \
-field=pubkey \
$(VAULT_PUBLIC_KEY) \
| base64 --decode \
> $(PUBLIC_KEY_FILE)

## -- Job executed on all PRs

ci-pr: check-license-header
docker build -f Dockerfile -t k8s-operators-ci-pr .
docker build -f Dockerfile -t cloud-on-k8s-ci-pr .
docker run --rm -t \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(ROOT_DIR):$(GO_MOUNT_PATH) \
-w $(GO_MOUNT_PATH) \
-e "IMG_SUFFIX=-ci" \
--net=host \
k8s-operators-ci-pr \
cloud-on-k8s-ci-pr \
bash -c \
"make -C operators ci && \
make -C local-volume ci"

## -- Release job

ci-release: vault-gke-creds
docker build -f Dockerfile -t k8s-operators-ci-release .
docker build -f Dockerfile -t cloud-on-k8s-ci-release .
docker run --rm -t \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(ROOT_DIR):$(GO_MOUNT_PATH) \
-w $(GO_MOUNT_PATH) \
-e "USER=random_fellow" \
-e "TAG=${TAG_NAME}" \
-e "IMG_SUFFIX=-release" \
-e "REGISTRY=${REGISTRY}" \
-e "REPOSITORY=${GCLOUD_PROJECT}" \
-e "GCLOUD_PROJECT=${GCLOUD_PROJECT}" \
-e "GKE_SERVICE_ACCOUNT_KEY_FILE=$(GO_MOUNT_PATH)/build/ci/$(GKE_CREDS_FILE)" \
k8s-operators-ci-release \
bash -c "make -C operators ci-release"
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(ROOT_DIR):$(GO_MOUNT_PATH) \
-w $(GO_MOUNT_PATH) \
-e "USER=random_fellow" \
-e "TAG=${TAG_NAME}" \
-e "IMG_SUFFIX=-release" \
-e "REGISTRY=${REGISTRY}" \
-e "REPOSITORY=${GCLOUD_PROJECT}" \
-e "GCLOUD_PROJECT=${GCLOUD_PROJECT}" \
-e "GKE_SERVICE_ACCOUNT_KEY_FILE=$(GO_MOUNT_PATH)/build/ci/$(GKE_CREDS_FILE)" \
cloud-on-k8s-ci-release \
bash -c "make -C operators ci-release"

## -- End-to-end tests job

# Spawn a k8s cluster, and run e2e tests against it
ci-e2e: vault-gke-creds
docker build -f Dockerfile -t k8s-operators-ci-e2e .
docker build -f Dockerfile -t cloud-on-k8s-ci-e2e .
docker run --rm -t \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(ROOT_DIR):$(GO_MOUNT_PATH) \
@@ -82,5 +82,5 @@ ci-e2e: vault-gke-creds
-e "REPOSITORY=$(GCLOUD_PROJECT)" \
-e "GKE_CLUSTER_NAME=e2e-qa-$(shell date +'%Y%m%d-%H%M%S')" \
-e "GKE_SERVICE_ACCOUNT_KEY_FILE=$(GO_MOUNT_PATH)/build/ci/$(GKE_CREDS_FILE)" \
k8s-operators-ci-e2e \
cloud-on-k8s-ci-e2e \
bash -c "make -C operators ci-e2e GKE_MACHINE_TYPE=n1-standard-8"
2 changes: 1 addition & 1 deletion docs/design/0002-global-operator/0002-global-operator.md
@@ -59,7 +59,7 @@ Allow for a hybrid approach where it is possible to enable the components of bot

## Decision Outcome

Superseded by [005](https://github.com/elastic/k8s-operators/blob/master/docs/design/0005-configurable-operator.md).
Superseded by [005](https://github.com/elastic/cloud-on-k8s/blob/master/docs/design/0005-configurable-operator.md).

### Positive Consequences <!-- optional -->

8 changes: 4 additions & 4 deletions docs/design/0005-configurable-operator.md
@@ -1,7 +1,7 @@
# 5. Configurable operator and RBAC permissions

* Status: proposed
* Deciders: k8s-operators team
* Deciders: cloud-on-k8s team
* Date: 2019-02-13

## Context and Problem Statement
@@ -37,7 +37,7 @@ The second option (namespace operator) also has some major drawbacks:

### Option 1: global and namespace operators

[In a previous design proposal](https://github.com/elastic/k8s-operators/blob/master/docs/design/0002-global-operator/0002-global-operator.md), we introduced the concepts of one global and several namespace operators.
[In a previous design proposal](https://github.com/elastic/cloud-on-k8s/blob/master/docs/design/0002-global-operator/0002-global-operator.md), we introduced the concepts of one global and several namespace operators.

The global operator deployed cluster-wide responsible for high-level cross-cluster features (CCR, CCS, enterprise licenses).
Namespace operators are responsible for managing clusters in a single namespace. There might be several namespace operators running on a single cluster.
@@ -164,5 +164,5 @@ Cons:

## Links

* [Discussion issue](https://github.com/elastic/k8s-operators/issues/374)
* [Global operator ADR](https://github.com/elastic/k8s-operators/blob/master/docs/design/0002-global-operator/0002-global-operator.md)
* [Discussion issue](https://github.com/elastic/cloud-on-k8s/issues/374)
* [Global operator ADR](https://github.com/elastic/cloud-on-k8s/blob/master/docs/design/0002-global-operator/0002-global-operator.md)
6 changes: 3 additions & 3 deletions docs/design/0006-certificate-management.md
@@ -76,7 +76,7 @@ Several options considered involving an init container in the ES pod:
* *Option B*: the init container requests the operator through an API in the operator to send the CSR. Disadvantage: similar to Option A, we implicitly authorize the ES pod to reach the operator. Even though this can be restricted to a single endpoint, it's an additional possible flow that could lead to security issues.
* *Option C*: the init container runs an HTTP server to serve the generated CSR. The operator requests the CSR through this API. Advantage: the pod does not need to reach any other service. Disadvantage: some additional complexity in the design and the implementation.

Option C is the chosen one here. For more details on the actual workflow, see the [cert-initializer README](https://github.com/elastic/k8s-operators/blob/master/operators/cmd/cert-initializer/README.md).
Option C is the chosen one here. For more details on the actual workflow, see the [cert-initializer README](https://github.com/elastic/cloud-on-k8s/blob/master/operators/cmd/cert-initializer/README.md).

#### Pods certificate rotation

@@ -169,5 +169,5 @@ Even though choosing option 1 by default, option 3 could also be handled through

## Links

* [cert-initializer README](https://github.com/elastic/k8s-operators/blob/master/operators/cmd/cert-initializer/README.md) describing interactions between the operator and the cert-initializer init container.
* [coordination with Kibana](https://github.com/elastic/k8s-operators/issues/118)
* [cert-initializer README](https://github.com/elastic/cloud-on-k8s/blob/master/operators/cmd/cert-initializer/README.md) describing interactions between the operator and the cert-initializer init container.
* [coordination with Kibana](https://github.com/elastic/cloud-on-k8s/issues/118)
4 changes: 2 additions & 2 deletions docs/design/0006-sidecar-health.md
@@ -1,7 +1,7 @@
# 6. Elasticsearch sidecar health

* Status: proposed
* Deciders: k8s-operators team
* Deciders: cloud-on-k8s team
* Date: 2019-03-05

## Context and Problem Statement
@@ -122,6 +122,6 @@ seen secret revison y.

## Links

* [Discussion issue](https://github.com/elastic/k8s-operators/issues/432)
* [Discussion issue](https://github.com/elastic/cloud-on-k8s/issues/432)
* https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
* https://www.elastic.co/guide/en/beats/filebeat/master/running-on-kubernetes.html
6 changes: 3 additions & 3 deletions docs/design/0007-local-volume-total-capacity.md
@@ -6,7 +6,7 @@

## Context and Problem Statement

Our current [dynamic provisioner for local volumes](https://github.com/elastic/k8s-operators/tree/master/local-volume) does not handle maximum storage available on nodes. It means a pod can get assigned to a node for which we'd need to create a PersistentVolume, even though the physical disk might be full already.
Our current [dynamic provisioner for local volumes](https://github.com/elastic/cloud-on-k8s/tree/master/local-volume) does not handle maximum storage available on nodes. It means a pod can get assigned to a node for which we'd need to create a PersistentVolume, even though the physical disk might be full already.

The way it currently works is the following:

@@ -132,6 +132,6 @@ Cons:

## Links <!-- optional -->

* [Elastic dynamic provisioner for local volumes](https://github.com/elastic/k8s-operators/tree/master/local-volume)
* [Elastic dynamic provisioner for local volumes](https://github.com/elastic/cloud-on-k8s/tree/master/local-volume)
* [Kubernetes static local volume provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner)
* [Local volume initial issue](https://github.com/elastic/k8s-operators/issues/108)
* [Local volume initial issue](https://github.com/elastic/cloud-on-k8s/issues/108)
2 changes: 1 addition & 1 deletion docs/design/0008-volume-management.md
@@ -1,7 +1,7 @@
# 8. Volume Management in case of disruption

* Status: proposed
* Deciders: k8s-operators team
* Deciders: cloud-on-k8s team
* Date: 2019-03-08

## Context and Problem Statement
8 changes: 4 additions & 4 deletions docs/design/0009-pod-reuse-es-restart.md
@@ -1,7 +1,7 @@
# Reusing pods by restarting the ES process with a new configuration

* Status: proposed
* Deciders: k8s-operators team
* Deciders: cloud-on-k8s team
* Date: 2019-03-20

## Context and Problem Statement
@@ -25,7 +25,7 @@ Reusing pods may also be useful in other situations, where simply restarting Ela

## Considered Options

There's a single option outlined in this proposal. [This issue](https://github.com/elastic/k8s-operators/issues/454) contains other draft algorithm implementations.
There's a single option outlined in this proposal. [This issue](https://github.com/elastic/cloud-on-k8s/issues/454) contains other draft algorithm implementations.

Other options that are considered not good enough:

@@ -205,7 +205,7 @@ Chosen option: option 1, because that's the only one we have here? :)

## Links

* [https://github.com/elastic/k8s-operators/issues/454] Full cluster restart issue
* [https://github.com/elastic/k8s-operators/issues/453] Basic license support issue
* [https://github.com/elastic/cloud-on-k8s/issues/454] Full cluster restart issue
* [https://github.com/elastic/cloud-on-k8s/issues/453] Basic license support issue
* [https://www.elastic.co/guide/en/elasticsearch/reference/current/restart-upgrade.html] Elasticsearch full cluster restart upgrade
* [https://www.elastic.co/guide/en/elasticsearch/reference/current/rolling-upgrades.html] Elasticsearch rolling cluster restart upgrade
8 changes: 4 additions & 4 deletions docs/quickstart.md
@@ -22,13 +22,13 @@ You will learn how to:
1. Install [custom resource definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/), to extend the apiserver with additional resources:

```bash
kubectl apply -f https://raw.githubusercontent.com/elastic/k8s-operators/master/operators/config/crds.yaml
kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/master/operators/config/crds.yaml
```

2. Install the operator with its RBAC rules:

```bash
kubectl apply -f https://raw.githubusercontent.com/elastic/k8s-operators/master/operators/config/all-in-one.yaml
kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/master/operators/config/all-in-one.yaml
```

3. Monitor the operator logs:
@@ -229,7 +229,7 @@ EOF
To secure your production-grade Elasticsearch deployment, you can:

* Use XPack security for encryption and authentication (TODO: link here to a tutorial on how to manipulate certs and auth)
* Set up an ingress proxy layer ([example using NGINX](https://github.com/elastic/k8s-operators/blob/master/operators/config/samples/ingress/nginx-ingress.yaml))
* Set up an ingress proxy layer ([example using NGINX](https://github.com/elastic/cloud-on-k8s/blob/master/operators/config/samples/ingress/nginx-ingress.yaml))

### Use persistent storage

@@ -265,7 +265,7 @@ spec:

To aim for the best performance, the operator supports persistent volumes local to each node. For more details, see:

* [elastic local volume dynamic provisioner](https://github.com/elastic/k8s-operators/tree/master/local-volume) to setup dynamic local volumes based on LVM
* [elastic local volume dynamic provisioner](https://github.com/elastic/cloud-on-k8s/tree/master/local-volume) to setup dynamic local volumes based on LVM
* [kubernetes-sigs local volume static provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) to setup static local volumes

### Additional features
6 changes: 3 additions & 3 deletions local-volume/Dockerfile
@@ -1,7 +1,7 @@
FROM golang:1.11 as builder

# Build
WORKDIR /go/src/github.com/elastic/k8s-operators/local-volume
WORKDIR /go/src/github.com/elastic/cloud-on-k8s/local-volume

COPY vendor/ vendor/
COPY pkg/ pkg/
@@ -15,8 +15,8 @@ RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 && \

# Copy artefacts
WORKDIR /app/
RUN cp /go/src/github.com/elastic/k8s-operators/local-volume/bin/* . && \
cp /go/src/github.com/elastic/k8s-operators/local-volume/scripts/* . && \
RUN cp /go/src/github.com/elastic/cloud-on-k8s/local-volume/bin/* . && \
cp /go/src/github.com/elastic/cloud-on-k8s/local-volume/scripts/* . && \
rm -r /go/src/

# --
2 changes: 1 addition & 1 deletion local-volume/README.md
@@ -56,7 +56,7 @@ kubectl apply -f config/pvc-sample.yaml -f config/pod-sample.yaml

## Architecture

![architecture](https://github.com/elastic/k8s-operators/blob/master/local-volume/architecture.svg)
![architecture](https://github.com/elastic/cloud-on-k8s/blob/master/local-volume/architecture.svg)

The provisioner only interacts with the APIServer: it watches any new PVC matching our StorageClass provisioner, and dynamically creates a matching PV.

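The README line in the hunk above summarizes the provisioner's control loop: watch for new PVCs that reference its StorageClass and dynamically create a matching PV for each. A simplified in-memory model of that reconciliation step (class names, fields, and the StorageClass name are illustrative; the real provisioner works against the Kubernetes apiserver):

```python
from dataclasses import dataclass

STORAGE_CLASS = "elastic-local"  # assumed name, not the actual StorageClass

@dataclass
class PVC:
    name: str
    storage_class: str
    capacity: str

@dataclass
class PV:
    name: str
    claim: str       # PVC this volume is bound to
    capacity: str

def reconcile(pvcs, existing_pvs):
    """Create a PV for each matching PVC that is not already bound."""
    bound = {pv.claim for pv in existing_pvs}
    new_pvs = []
    for pvc in pvcs:
        if pvc.storage_class == STORAGE_CLASS and pvc.name not in bound:
            new_pvs.append(
                PV(name=f"pv-{pvc.name}", claim=pvc.name, capacity=pvc.capacity)
            )
    return new_pvs
```

Run repeatedly, the loop is idempotent: once a PV exists for a claim, subsequent passes create nothing for it.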
2 changes: 1 addition & 1 deletion local-volume/cmd/driverclient/init.go
@@ -7,7 +7,7 @@ package main
import (
"fmt"

"github.com/elastic/k8s-operators/local-volume/pkg/driver/client"
"github.com/elastic/cloud-on-k8s/local-volume/pkg/driver/client"
"github.com/spf13/cobra"
)

2 changes: 1 addition & 1 deletion local-volume/cmd/driverclient/mount.go
@@ -7,7 +7,7 @@ package main
import (
"fmt"

"github.com/elastic/k8s-operators/local-volume/pkg/driver/client"
"github.com/elastic/cloud-on-k8s/local-volume/pkg/driver/client"
"github.com/spf13/cobra"
)

2 changes: 1 addition & 1 deletion local-volume/cmd/driverclient/unmount.go
@@ -7,7 +7,7 @@ package main
import (
"fmt"

"github.com/elastic/k8s-operators/local-volume/pkg/driver/client"
"github.com/elastic/cloud-on-k8s/local-volume/pkg/driver/client"
"github.com/spf13/cobra"
)

10 changes: 5 additions & 5 deletions local-volume/cmd/driverdaemon/main.go
@@ -9,11 +9,11 @@ import (
"os"
"strings"

"github.com/elastic/k8s-operators/local-volume/pkg/driver/daemon"
"github.com/elastic/k8s-operators/local-volume/pkg/driver/daemon/cmdutil"
"github.com/elastic/k8s-operators/local-volume/pkg/driver/daemon/drivers"
"github.com/elastic/k8s-operators/local-volume/pkg/driver/daemon/drivers/bindmount"
"github.com/elastic/k8s-operators/local-volume/pkg/driver/daemon/drivers/lvm"
"github.com/elastic/cloud-on-k8s/local-volume/pkg/driver/daemon"
"github.com/elastic/cloud-on-k8s/local-volume/pkg/driver/daemon/cmdutil"
"github.com/elastic/cloud-on-k8s/local-volume/pkg/driver/daemon/drivers"
"github.com/elastic/cloud-on-k8s/local-volume/pkg/driver/daemon/drivers/bindmount"
"github.com/elastic/cloud-on-k8s/local-volume/pkg/driver/daemon/drivers/lvm"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/spf13/viper"
2 changes: 1 addition & 1 deletion local-volume/cmd/provisioner/main.go
@@ -9,7 +9,7 @@ import (
"fmt"
"os"

"github.com/elastic/k8s-operators/local-volume/pkg/provisioner"
"github.com/elastic/cloud-on-k8s/local-volume/pkg/provisioner"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
2 changes: 1 addition & 1 deletion local-volume/pkg/driver/client/caller.go
@@ -14,7 +14,7 @@ import (
"net/http"
"path"

"github.com/elastic/k8s-operators/local-volume/pkg/driver/protocol"
"github.com/elastic/cloud-on-k8s/local-volume/pkg/driver/protocol"
)

const networkProtocol = "unix"
2 changes: 1 addition & 1 deletion local-volume/pkg/driver/client/client.go
@@ -7,7 +7,7 @@ package client
import (
"encoding/json"

"github.com/elastic/k8s-operators/local-volume/pkg/driver/protocol"
"github.com/elastic/cloud-on-k8s/local-volume/pkg/driver/protocol"
)

// Init performs a call to the /init path using the client.
