
Update Elasticsearch example to remove use of secrets #12621

Merged (1 commit, Aug 18, 2015)
4 changes: 2 additions & 2 deletions examples/elasticsearch/Makefile
@@ -1,8 +1,8 @@
.PHONY: elasticsearch_discovery build push all

-TAG = 1.0
+TAG = 1.1

-build:
+build: elasticsearch_discovery
	docker build -t kubernetes/elasticsearch:$(TAG) .

push:
149 changes: 71 additions & 78 deletions examples/elasticsearch/README.md
@@ -42,9 +42,7 @@ image detects other Elasticsearch [pods](../../docs/user-guide/pods.md) running
label selector. The detected instances are used to form a list of peer hosts which
are used as part of the unicast discovery mechanism for Elasticsearch. The detection
of the peer nodes is done by a program which communicates with the Kubernetes API
-server to get a list of matching Elasticsearch pods. To enable authenticated
-communication this image needs a [secret](../../docs/user-guide/secrets.md) to be mounted at `/etc/apiserver-secret`
-with the basic authentication username and password.
+server to get a list of matching Elasticsearch pods.
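The discovery step described here boils down to turning the list of matched pod IPs into the comma-separated unicast host list that Elasticsearch expects. A self-contained Go sketch of that idea (the pod IPs and the `buildUnicastHosts` helper are illustrative assumptions, not the actual `elasticsearch_discovery` code):

```go
package main

import (
	"fmt"
	"strings"
)

// buildUnicastHosts joins discovered pod IPs into the comma-separated
// host list used for Elasticsearch unicast discovery
// (discovery.zen.ping.unicast.hosts). Simplified sketch: the real
// program obtains the IPs from the Kubernetes API server via a label selector.
func buildUnicastHosts(podIPs []string, transportPort int) string {
	hosts := make([]string, 0, len(podIPs))
	for _, ip := range podIPs {
		hosts = append(hosts, fmt.Sprintf("%s:%d", ip, transportPort))
	}
	return strings.Join(hosts, ",")
}

func main() {
	// Hypothetical pod IPs; 9300 is the es-transport port used in this example.
	fmt.Println(buildUnicastHosts([]string{"10.244.1.5", "10.244.2.7"}, 9300))
}
```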

Here is an example replication controller specification that creates 4 instances of Elasticsearch.

@@ -69,7 +67,7 @@ spec:
    spec:
      containers:
      - name: es
-        image: kubernetes/elasticsearch:1.0
+        image: kubernetes/elasticsearch:1.1
        env:
        - name: "CLUSTER_NAME"
          value: "mytunes-db"
@@ -82,14 +80,6 @@ spec:
          containerPort: 9200
        - name: es-transport
          containerPort: 9300
-        volumeMounts:
-        - name: apiserver-secret
-          mountPath: /etc/apiserver-secret
-          readOnly: true
-      volumes:
-      - name: apiserver-secret
-        secret:
-          secretName: apiserver-secret
```

[Download example](music-rc.yaml)
@@ -104,62 +94,44 @@ The `NAMESPACE` variable identifies the namespace
to be used to search for Elasticsearch pods and this should be the same as the namespace specified
for the replication controller (in this case `mytunes`).

-Before creating pods with the replication controller a secret containing the bearer authentication token
-should be set up.

-<!-- BEGIN MUNGE: EXAMPLE apiserver-secret.yaml -->
+Replace `NAMESPACE` with the actual namespace to be used. In this example we shall use
+the namespace `mytunes`.

```yaml
+kind: Namespace
apiVersion: v1
-kind: Secret
metadata:
-  name: apiserver-secret
-  namespace: NAMESPACE
-data:
-  token: "TOKEN"
+  name: mytunes
+  labels:
+    name: mytunes
```

-[Download example](apiserver-secret.yaml)
-<!-- END MUNGE: EXAMPLE apiserver-secret.yaml -->

-Replace `NAMESPACE` with the actual namespace to be used and `TOKEN` with the basic64 encoded
-versions of the bearer token reported by `kubectl config view` e.g.
+First, let's create the namespace:

```console
-$ kubectl config view
-...
-- name: kubernetes-logging_kubernetes-basic-auth
-...
-    token: yGlDcMvSZPX4PyP0Q5bHgAYgi1iyEHv2
-...
-$ echo yGlDcMvSZPX4PyP0Q5bHgAYgi1iyEHv2 | base64
-eUdsRGNNdlNaUFg0UHlQMFE1YkhnQVlnaTFpeUVIdjIK=
+$ kubectl create -f examples/elasticsearch/mytunes-namespace.yaml
+namespaces/mytunes
```

-resulting in the file:
-
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: apiserver-secret
-  namespace: mytunes
-data:
-  token: "eUdsRGNNdlNaUFg0UHlQMFE1YkhnQVlnaTFpeUVIdjIK="
-```
-
-which can be used to create the secret in your namespace:
+Now you are ready to create the replication controller which will then create the pods:

```console
-kubectl create -f examples/elasticsearch/apiserver-secret.yaml --namespace=mytunes
-secrets/apiserver-secret
+$ kubectl create -f examples/elasticsearch/music-rc.yaml --namespace=mytunes
+replicationcontrollers/music-db
```

-Now you are ready to create the replication controller which will then create the pods:
+Let's check to see if the replication controller and pods are running:

```console
-$ kubectl create -f examples/elasticsearch/music-rc.yaml --namespace=mytunes
-replicationcontrollers/music-db
+$ kubectl get rc,pods --namespace=mytunes
+CONTROLLER   CONTAINER(S)   IMAGE(S)                       SELECTOR        REPLICAS
+music-db     es             kubernetes/elasticsearch:1.1   name=music-db   4
+NAME             READY     STATUS    RESTARTS   AGE
+music-db-5p46b   1/1       Running   0          34s
+music-db-8re0f   1/1       Running   0          34s
+music-db-eq8j0   1/1       Running   0          34s
+music-db-uq5px   1/1       Running   0          34s
```

It's also useful to have a [service](../../docs/user-guide/services.md) with a load balancer for accessing the Elasticsearch
@@ -195,29 +167,50 @@ $ kubectl create -f examples/elasticsearch/music-service.yaml --namespace=mytune
services/music-server
```

-Let's see what we've got:
+Let's check the status of the service:

```console
-$ kubectl get pods,rc,services,secrets --namespace=mytunes
+$ kubectl get service --namespace=mytunes
+NAME           LABELS          SELECTOR        IP(S)          PORT(S)
+music-server   name=music-db   name=music-db   10.0.185.179   9200/TCP
+
+```
+
+Although this service has an IP address `10.0.185.179` internal to the cluster we don't yet have
+an external IP address provisioned. Let's wait a bit and try again...
+
+```console
+$ kubectl get service --namespace=mytunes
+NAME           LABELS          SELECTOR        IP(S)          PORT(S)
+music-server   name=music-db   name=music-db   10.0.185.179   9200/TCP
+                                               104.197.114.130
+```
+
+Now we have an external IP address `104.197.114.130` available for accessing the service
+from outside the cluster.
+
+Let's see what we've got:
+
+```console
+$ kubectl get pods,rc,services --namespace=mytunes
NAME             READY     STATUS    RESTARTS   AGE
-music-db-cl4hw   1/1       Running   0          27m
-music-db-x8dbq   1/1       Running   0          27m
-music-db-xkebl   1/1       Running   0          27m
-music-db-ycjim   1/1       Running   0          27m
+music-db-5p46b   1/1       Running   0          7m
+music-db-8re0f   1/1       Running   0          7m
+music-db-eq8j0   1/1       Running   0          7m
+music-db-uq5px   1/1       Running   0          7m
CONTROLLER   CONTAINER(S)   IMAGE(S)                       SELECTOR        REPLICAS
-music-db     es             kubernetes/elasticsearch:1.0   name=music-db   4
-NAME           LABELS          SELECTOR        IP(S)         PORT(S)
-music-server   name=music-db   name=music-db   10.0.45.177   9200/TCP
-                                               104.197.12.157
-NAME               TYPE      DATA
-apiserver-secret   Opaque    1
+music-db     es             kubernetes/elasticsearch:1.1   name=music-db   4
+NAME           LABELS          SELECTOR        IP(S)          PORT(S)
+music-server   name=music-db   name=music-db   10.0.185.179   9200/TCP
+                                               104.197.114.130
+NAME                  TYPE                                  DATA
+default-token-gcilu   kubernetes.io/service-account-token   2
```

This shows 4 instances of Elasticsearch running. After making sure that port 9200 is accessible for this cluster (e.g. using a firewall rule for Google Compute Engine) we can make queries via the service which will be fielded by the matching Elasticsearch pods.

```console
-$ curl 104.197.12.157:9200
+$ curl 104.197.114.130:9200
{
  "status" : 200,
  "name" : "Warpath",
@@ -231,7 +224,7 @@ $ curl 104.197.12.157:9200
  },
  "tagline" : "You Know, for Search"
}
-$ curl 104.197.12.157:9200
+$ curl 104.197.114.130:9200
{
  "status" : 200,
  "name" : "Callisto",
@@ -250,7 +243,7 @@ $ curl 104.197.12.157:9200
We can query the nodes to confirm that an Elasticsearch cluster has been formed.

```console
-$ curl 104.197.12.157:9200/_nodes?pretty=true
+$ curl 104.197.114.130:9200/_nodes?pretty=true
{
  "cluster_name" : "mytunes-db",
  "nodes" : {
@@ -299,22 +292,22 @@ $ kubectl scale --replicas=10 replicationcontrollers music-db --namespace=mytune
scaled
$ kubectl get pods --namespace=mytunes
NAME             READY     STATUS    RESTARTS   AGE
-music-db-063vy   1/1       Running   0          38s
-music-db-5ej4e   1/1       Running   0          38s
-music-db-dl43y   1/1       Running   0          38s
-music-db-lw1lo   1/1       Running   0          1m
-music-db-s8hq2   1/1       Running   0          38s
-music-db-t98iw   1/1       Running   0          38s
-music-db-u1ru3   1/1       Running   0          38s
-music-db-wnss2   1/1       Running   0          1m
-music-db-x7j2w   1/1       Running   0          1m
-music-db-zjqyv   1/1       Running   0          1m
+music-db-0n8rm   0/1       Running   0          9s
+music-db-4izba   1/1       Running   0          9s
+music-db-5dqes   0/1       Running   0          9s
+music-db-5p46b   1/1       Running   0          10m
+music-db-8re0f   1/1       Running   0          10m
+music-db-eq8j0   1/1       Running   0          10m
+music-db-p9ajw   0/1       Running   0          9s
+music-db-p9u1k   1/1       Running   0          9s
+music-db-rav1q   0/1       Running   0          9s
+music-db-uq5px   1/1       Running   0          10m
```

Let's check to make sure that these 10 nodes are part of the same Elasticsearch cluster:

```console
-$ curl 104.197.12.157:9200/_nodes?pretty=true | grep name
+$ curl 104.197.114.130:9200/_nodes?pretty=true | grep name
"cluster_name" : "mytunes-db",
"name" : "Killraven",
"name" : "Killraven",
@@ -371,4 +364,4 @@

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/elasticsearch/README.md?pixel)]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
\ No newline at end of file
+<!-- END MUNGE: GENERATED_ANALYTICS -->
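The README's cluster-formation check (`curl .../_nodes`) can also be scripted. Here is a minimal self-contained Go sketch; the sample payload is an abbreviated stand-in for the real `/_nodes` output, and the `nodeCount` helper is ours, not part of the example:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeCount parses an Elasticsearch /_nodes response and returns the
// cluster name and the number of member nodes (keys of the "nodes" map).
func nodeCount(body []byte) (string, int, error) {
	var resp struct {
		ClusterName string                     `json:"cluster_name"`
		Nodes       map[string]json.RawMessage `json:"nodes"`
	}
	if err := json.Unmarshal(body, &resp); err != nil {
		return "", 0, err
	}
	return resp.ClusterName, len(resp.Nodes), nil
}

func main() {
	// Abbreviated, made-up stand-in for the /_nodes output in the README.
	sample := []byte(`{"cluster_name":"mytunes-db","nodes":{"n1":{},"n2":{},"n3":{},"n4":{}}}`)
	name, n, err := nodeCount(sample)
	if err != nil {
		panic(err)
	}
	fmt.Printf("cluster %s has %d nodes\n", name, n)
}
```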
8 changes: 0 additions & 8 deletions examples/elasticsearch/apiserver-secret.yaml

This file was deleted.

20 changes: 1 addition & 19 deletions examples/elasticsearch/elasticsearch_discovery.go
@@ -19,7 +19,6 @@ package main
import (
	"flag"
	"fmt"
-	"os"
	"strings"
	"time"

@@ -31,35 +30,18 @@ import (
)

var (
-	token     = flag.String("token", "", "Bearer token for authentication to the API server.")
	server    = flag.String("server", "", "The address and port of the Kubernetes API server")
	namespace = flag.String("namespace", api.NamespaceDefault, "The namespace containing Elasticsearch pods")
	selector  = flag.String("selector", "", "Selector (label query) for selecting Elasticsearch pods")
)

func main() {
	flag.Parse()
	glog.Info("Elasticsearch discovery")
-	apiServer := *server
-	if apiServer == "" {
-		kubernetesService := os.Getenv("KUBERNETES_SERVICE_HOST")
-		if kubernetesService == "" {
-			glog.Fatalf("Please specify the Kubernetes server with --server")
-		}
-		apiServer = fmt.Sprintf("https://%s:%s", kubernetesService, os.Getenv("KUBERNETES_SERVICE_PORT"))
-	}
-
-	glog.Infof("Server: %s", apiServer)
	glog.Infof("Namespace: %q", *namespace)
	glog.Infof("selector: %q", *selector)

-	config := client.Config{
-		Host:        apiServer,
-		BearerToken: *token,
-		Insecure:    true,
-	}
-
-	c, err := client.New(&config)
+	c, err := client.NewInCluster()
	if err != nil {
		glog.Fatalf("Failed to make client: %v", err)
	}
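This change replaces flag-driven configuration (`--server`, `--token`) with `client.NewInCluster()`, which derives everything from the pod's own environment. Below is a rough self-contained sketch of the address-derivation half, under the assumption (consistent with the deleted code) that the API server location arrives via the `KUBERNETES_SERVICE_HOST`/`KUBERNETES_SERVICE_PORT` environment variables. `inClusterHost` is an illustrative helper of our own; the real function additionally loads the service-account token and CA certificate from `/var/run/secrets/kubernetes.io/serviceaccount`:

```go
package main

import (
	"fmt"
	"os"
)

// inClusterHost derives the API server URL from the service environment
// variables injected into every pod, instead of a --server/--token flag.
// Sketch only; real in-cluster config also sets up credentials and TLS.
func inClusterHost(host, port string) (string, error) {
	if host == "" || port == "" {
		return "", fmt.Errorf("not running inside a cluster")
	}
	return "https://" + host + ":" + port, nil
}

func main() {
	u, err := inClusterHost(os.Getenv("KUBERNETES_SERVICE_HOST"), os.Getenv("KUBERNETES_SERVICE_PORT"))
	if err != nil {
		fmt.Println("outside cluster:", err)
		return
	}
	fmt.Println(u)
}
```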
11 changes: 2 additions & 9 deletions examples/elasticsearch/music-rc.yaml
@@ -16,7 +16,7 @@ spec:
    spec:
      containers:
      - name: es
-        image: kubernetes/elasticsearch:1.0
+        image: kubernetes/elasticsearch:1.1
        env:
        - name: "CLUSTER_NAME"
          value: "mytunes-db"
@@ -29,11 +29,4 @@ spec:
          containerPort: 9200
        - name: es-transport
          containerPort: 9300
-        volumeMounts:
-        - name: apiserver-secret
-          mountPath: /etc/apiserver-secret
-          readOnly: true
-      volumes:
-      - name: apiserver-secret
-        secret:
-          secretName: apiserver-secret

6 changes: 6 additions & 0 deletions examples/elasticsearch/mytunes-namespace.yaml
@@ -0,0 +1,6 @@
+kind: Namespace
+apiVersion: v1
+metadata:
+  name: mytunes
+  labels:
+    name: mytunes
6 changes: 3 additions & 3 deletions examples/examples_test.go
@@ -237,9 +237,9 @@ func TestExampleObjectSchemas(t *testing.T) {
		"dapi-pod": &api.Pod{},
	},
	"../examples/elasticsearch": {
-		"apiserver-secret": nil,
-		"music-rc":         &api.ReplicationController{},
-		"music-service":    &api.Service{},
+		"mytunes-namespace": &api.Namespace{},
+		"music-rc":          &api.ReplicationController{},
+		"music-service":     &api.Service{},
	},
	"../examples/explorer": {
		"pod": &api.Pod{},